00:00:00.002 Started by upstream project "autotest-per-patch" build number 132555
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:02.378 The recommended git tool is: git
00:00:02.378 using credential 00000000-0000-0000-0000-000000000002
00:00:02.381 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.394 Fetching changes from the remote Git repository
00:00:02.396 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.407 Using shallow fetch with depth 1
00:00:02.407 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.407 > git --version # timeout=10
00:00:02.418 > git --version # 'git version 2.39.2'
00:00:02.418 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.431 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.431 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:09.107 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:09.120 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:09.132 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:09.132 > git config core.sparsecheckout # timeout=10
00:00:09.143 > git read-tree -mu HEAD # timeout=10
00:00:09.161 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:09.186 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:09.186 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:09.298 [Pipeline] Start of Pipeline
00:00:09.309 [Pipeline] library
00:00:09.311 Loading library shm_lib@master
00:00:09.311 Library shm_lib@master is cached. Copying from home.
00:00:09.325 [Pipeline] node
00:00:09.336 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_3
00:00:09.338 [Pipeline] {
00:00:09.349 [Pipeline] catchError
00:00:09.351 [Pipeline] {
00:00:09.361 [Pipeline] wrap
00:00:09.368 [Pipeline] {
00:00:09.374 [Pipeline] stage
00:00:09.375 [Pipeline] { (Prologue)
00:00:09.391 [Pipeline] echo
00:00:09.392 Node: VM-host-WFP7
00:00:09.396 [Pipeline] cleanWs
00:00:09.405 [WS-CLEANUP] Deleting project workspace...
00:00:09.405 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.412 [WS-CLEANUP] done
00:00:09.651 [Pipeline] setCustomBuildProperty
00:00:09.742 [Pipeline] httpRequest
00:00:10.312 [Pipeline] echo
00:00:10.313 Sorcerer 10.211.164.20 is alive
00:00:10.321 [Pipeline] retry
00:00:10.323 [Pipeline] {
00:00:10.337 [Pipeline] httpRequest
00:00:10.341 HttpMethod: GET
00:00:10.342 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.342 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.367 Response Code: HTTP/1.1 200 OK
00:00:10.368 Success: Status code 200 is in the accepted range: 200,404
00:00:10.368 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.582 [Pipeline] }
00:00:33.599 [Pipeline] // retry
00:00:33.605 [Pipeline] sh
00:00:33.886 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.905 [Pipeline] httpRequest
00:00:34.282 [Pipeline] echo
00:00:34.284 Sorcerer 10.211.164.20 is alive
00:00:34.293 [Pipeline] retry
00:00:34.295 [Pipeline] {
00:00:34.308 [Pipeline] httpRequest
00:00:34.312 HttpMethod: GET
00:00:34.313 URL: http://10.211.164.20/packages/spdk_0836dccda7206a4b7a3073e9290926f61f5a497f.tar.gz
00:00:34.313 Sending request to url: http://10.211.164.20/packages/spdk_0836dccda7206a4b7a3073e9290926f61f5a497f.tar.gz
00:00:34.319 Response Code: HTTP/1.1 200 OK
00:00:34.319 Success: Status code 200 is in the accepted range: 200,404
00:00:34.320 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/spdk_0836dccda7206a4b7a3073e9290926f61f5a497f.tar.gz
00:03:15.641 [Pipeline] }
00:03:15.659 [Pipeline] // retry
00:03:15.668 [Pipeline] sh
00:03:15.949 + tar --no-same-owner -xf spdk_0836dccda7206a4b7a3073e9290926f61f5a497f.tar.gz
00:03:18.520 [Pipeline] sh
00:03:18.806 + git -C spdk log --oneline -n5
00:03:18.806 0836dccda bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:03:18.806 fb1630bf7 bdev: Use data_block_size for upper layer buffer if hide_metadata is true
00:03:18.806 67afc973b bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:03:18.806 16e5e505a bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:03:18.806 20b346609 bdev: Locate all hot data in spdk_bdev_desc to the first cache line
00:03:18.824 [Pipeline] writeFile
00:03:18.842 [Pipeline] sh
00:03:19.127 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:19.141 [Pipeline] sh
00:03:19.430 + cat autorun-spdk.conf
00:03:19.430 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:19.430 SPDK_RUN_ASAN=1
00:03:19.430 SPDK_RUN_UBSAN=1
00:03:19.430 SPDK_TEST_RAID=1
00:03:19.430 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:19.438 RUN_NIGHTLY=0
00:03:19.440 [Pipeline] }
00:03:19.457 [Pipeline] // stage
00:03:19.474 [Pipeline] stage
00:03:19.476 [Pipeline] { (Run VM)
00:03:19.490 [Pipeline] sh
00:03:19.774 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:19.774 + echo 'Start stage prepare_nvme.sh'
00:03:19.775 Start stage prepare_nvme.sh
00:03:19.775 + [[ -n 1 ]]
00:03:19.775 + disk_prefix=ex1
00:03:19.775 + [[ -n /var/jenkins/workspace/raid-vg-autotest_3 ]]
00:03:19.775 + [[ -e /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf ]]
00:03:19.775 + source /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf
00:03:19.775 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:19.775 ++ SPDK_RUN_ASAN=1
00:03:19.775 ++ SPDK_RUN_UBSAN=1
00:03:19.775 ++ SPDK_TEST_RAID=1
00:03:19.775 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:19.775 ++ RUN_NIGHTLY=0
00:03:19.775 + cd /var/jenkins/workspace/raid-vg-autotest_3
00:03:19.775 + nvme_files=()
00:03:19.775 + declare -A nvme_files
00:03:19.775 + backend_dir=/var/lib/libvirt/images/backends
00:03:19.775 + nvme_files['nvme.img']=5G
00:03:19.775 + nvme_files['nvme-cmb.img']=5G
00:03:19.775 + nvme_files['nvme-multi0.img']=4G
00:03:19.775 + nvme_files['nvme-multi1.img']=4G
00:03:19.775 + nvme_files['nvme-multi2.img']=4G
00:03:19.775 + nvme_files['nvme-openstack.img']=8G
00:03:19.775 + nvme_files['nvme-zns.img']=5G
00:03:19.775 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:19.775 + (( SPDK_TEST_FTL == 1 ))
00:03:19.775 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:19.775 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:19.775 + for nvme in "${!nvme_files[@]}"
00:03:19.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:03:19.775 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:19.775 + for nvme in "${!nvme_files[@]}"
00:03:19.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:03:19.775 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:19.775 + for nvme in "${!nvme_files[@]}"
00:03:19.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:03:19.775 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:19.775 + for nvme in "${!nvme_files[@]}"
00:03:19.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:03:19.775 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:19.775 + for nvme in "${!nvme_files[@]}"
00:03:19.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:03:19.775 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:19.775 + for nvme in "${!nvme_files[@]}"
00:03:19.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:03:19.775 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:19.775 + for nvme in "${!nvme_files[@]}"
00:03:19.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:03:20.713 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:20.713 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:03:20.713 + echo 'End stage prepare_nvme.sh'
00:03:20.713 End stage prepare_nvme.sh
00:03:20.724 [Pipeline] sh
00:03:21.007 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:21.007 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:03:21.007
00:03:21.007 DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant
00:03:21.007 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk
00:03:21.007 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_3
00:03:21.007 HELP=0
00:03:21.007 DRY_RUN=0
00:03:21.007 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:03:21.007 NVME_DISKS_TYPE=nvme,nvme,
00:03:21.007 NVME_AUTO_CREATE=0
00:03:21.007 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:03:21.007 NVME_CMB=,,
00:03:21.007 NVME_PMR=,,
00:03:21.007 NVME_ZNS=,,
00:03:21.007 NVME_MS=,,
00:03:21.007 NVME_FDP=,,
00:03:21.007 SPDK_VAGRANT_DISTRO=fedora39
00:03:21.007 SPDK_VAGRANT_VMCPU=10
00:03:21.007 SPDK_VAGRANT_VMRAM=12288
00:03:21.007 SPDK_VAGRANT_PROVIDER=libvirt
00:03:21.007 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:21.007 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:21.007 SPDK_OPENSTACK_NETWORK=0
00:03:21.007 VAGRANT_PACKAGE_BOX=0
00:03:21.007 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:03:21.007 FORCE_DISTRO=true
00:03:21.007 VAGRANT_BOX_VERSION=
00:03:21.007 EXTRA_VAGRANTFILES=
00:03:21.007 NIC_MODEL=virtio
00:03:21.007
00:03:21.007 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt'
00:03:21.007 /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_3
00:03:23.548 Bringing machine 'default' up with 'libvirt' provider...
00:03:23.807 ==> default: Creating image (snapshot of base box volume).
00:03:24.066 ==> default: Creating domain with the following settings...
00:03:24.066 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732652177_493cdae6dbfa831cde79
00:03:24.066 ==> default: -- Domain type: kvm
00:03:24.066 ==> default: -- Cpus: 10
00:03:24.066 ==> default: -- Feature: acpi
00:03:24.066 ==> default: -- Feature: apic
00:03:24.066 ==> default: -- Feature: pae
00:03:24.066 ==> default: -- Memory: 12288M
00:03:24.066 ==> default: -- Memory Backing: hugepages:
00:03:24.066 ==> default: -- Management MAC:
00:03:24.066 ==> default: -- Loader:
00:03:24.066 ==> default: -- Nvram:
00:03:24.066 ==> default: -- Base box: spdk/fedora39
00:03:24.066 ==> default: -- Storage pool: default
00:03:24.066 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732652177_493cdae6dbfa831cde79.img (20G)
00:03:24.066 ==> default: -- Volume Cache: default
00:03:24.066 ==> default: -- Kernel:
00:03:24.066 ==> default: -- Initrd:
00:03:24.066 ==> default: -- Graphics Type: vnc
00:03:24.066 ==> default: -- Graphics Port: -1
00:03:24.066 ==> default: -- Graphics IP: 127.0.0.1
00:03:24.066 ==> default: -- Graphics Password: Not defined
00:03:24.066 ==> default: -- Video Type: cirrus
00:03:24.066 ==> default: -- Video VRAM: 9216
00:03:24.066 ==> default: -- Sound Type:
00:03:24.066 ==> default: -- Keymap: en-us
00:03:24.066 ==> default: -- TPM Path:
00:03:24.066 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:24.066 ==> default: -- Command line args:
00:03:24.066 ==> default: -> value=-device,
00:03:24.066 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:24.066 ==> default: -> value=-drive,
00:03:24.066 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:03:24.066 ==> default: -> value=-device,
00:03:24.066 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:24.066 ==> default: -> value=-device,
00:03:24.066 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:24.066 ==> default: -> value=-drive,
00:03:24.066 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:24.066 ==> default: -> value=-device,
00:03:24.066 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:24.066 ==> default: -> value=-drive,
00:03:24.066 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:24.066 ==> default: -> value=-device,
00:03:24.066 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:24.066 ==> default: -> value=-drive,
00:03:24.066 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:24.066 ==> default: -> value=-device,
00:03:24.066 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:24.326 ==> default: Creating shared folders metadata...
00:03:24.326 ==> default: Starting domain.
00:03:25.705 ==> default: Waiting for domain to get an IP address...
00:03:43.838 ==> default: Waiting for SSH to become available...
00:03:43.838 ==> default: Configuring and enabling network interfaces...
00:03:48.097 default: SSH address: 192.168.121.236:22
00:03:48.097 default: SSH username: vagrant
00:03:48.097 default: SSH auth method: private key
00:03:51.394 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:59.530 ==> default: Mounting SSHFS shared folder...
00:04:01.433 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:01.433 ==> default: Checking Mount..
00:04:02.813 ==> default: Folder Successfully Mounted!
00:04:02.813 ==> default: Running provisioner: file...
00:04:03.754 default: ~/.gitconfig => .gitconfig
00:04:04.322
00:04:04.322 SUCCESS!
00:04:04.322
00:04:04.322 cd to /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:04:04.322 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:04.322 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:04:04.322
00:04:04.331 [Pipeline] }
00:04:04.346 [Pipeline] // stage
00:04:04.357 [Pipeline] dir
00:04:04.358 Running in /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt
00:04:04.359 [Pipeline] {
00:04:04.371 [Pipeline] catchError
00:04:04.373 [Pipeline] {
00:04:04.386 [Pipeline] sh
00:04:04.669 + vagrant ssh-config --host vagrant
00:04:04.669 + sed -ne /^Host/,$p
00:04:04.669 + tee ssh_conf
00:04:07.964 Host vagrant
00:04:07.964 HostName 192.168.121.236
00:04:07.964 User vagrant
00:04:07.964 Port 22
00:04:07.964 UserKnownHostsFile /dev/null
00:04:07.964 StrictHostKeyChecking no
00:04:07.964 PasswordAuthentication no
00:04:07.964 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:07.964 IdentitiesOnly yes
00:04:07.964 LogLevel FATAL
00:04:07.964 ForwardAgent yes
00:04:07.964 ForwardX11 yes
00:04:07.964
00:04:07.978 [Pipeline] withEnv
00:04:07.980 [Pipeline] {
00:04:07.993 [Pipeline] sh
00:04:08.276 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:08.276 source /etc/os-release
00:04:08.276 [[ -e /image.version ]] && img=$(< /image.version)
00:04:08.276 # Minimal, systemd-like check.
00:04:08.276 if [[ -e /.dockerenv ]]; then
00:04:08.276 # Clear garbage from the node's name:
00:04:08.276 # agt-er_autotest_547-896 -> autotest_547-896
00:04:08.276 # $HOSTNAME is the actual container id
00:04:08.276 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:08.276 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:08.276 # We can assume this is a mount from a host where container is running,
00:04:08.276 # so fetch its hostname to easily identify the target swarm worker.
00:04:08.276 container="$(< /etc/hostname) ($agent)"
00:04:08.276 else
00:04:08.276 # Fallback
00:04:08.276 container=$agent
00:04:08.276 fi
00:04:08.276 fi
00:04:08.276 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:08.276
00:04:08.546 [Pipeline] }
00:04:08.564 [Pipeline] // withEnv
00:04:08.572 [Pipeline] setCustomBuildProperty
00:04:08.587 [Pipeline] stage
00:04:08.590 [Pipeline] { (Tests)
00:04:08.607 [Pipeline] sh
00:04:08.891 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:09.165 [Pipeline] sh
00:04:09.449 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:09.723 [Pipeline] timeout
00:04:09.723 Timeout set to expire in 1 hr 30 min
00:04:09.725 [Pipeline] {
00:04:09.740 [Pipeline] sh
00:04:10.023 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:10.591 HEAD is now at 0836dccda bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:04:10.604 [Pipeline] sh
00:04:10.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:11.195 [Pipeline] sh
00:04:11.480 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:11.754 [Pipeline] sh
00:04:12.037 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:04:12.296 ++ readlink -f spdk_repo
00:04:12.296 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:12.296 + [[ -n /home/vagrant/spdk_repo ]]
00:04:12.296 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:12.296 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:12.296 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:12.296 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:12.296 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:12.296 + [[ raid-vg-autotest == pkgdep-* ]]
00:04:12.296 + cd /home/vagrant/spdk_repo
00:04:12.296 + source /etc/os-release
00:04:12.296 ++ NAME='Fedora Linux'
00:04:12.296 ++ VERSION='39 (Cloud Edition)'
00:04:12.296 ++ ID=fedora
00:04:12.296 ++ VERSION_ID=39
00:04:12.296 ++ VERSION_CODENAME=
00:04:12.296 ++ PLATFORM_ID=platform:f39
00:04:12.296 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:12.296 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:12.296 ++ LOGO=fedora-logo-icon
00:04:12.296 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:12.296 ++ HOME_URL=https://fedoraproject.org/
00:04:12.296 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:12.296 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:12.296 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:12.296 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:12.296 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:12.296 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:12.296 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:12.296 ++ SUPPORT_END=2024-11-12
00:04:12.296 ++ VARIANT='Cloud Edition'
00:04:12.296 ++ VARIANT_ID=cloud
00:04:12.296 + uname -a
00:04:12.296 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:12.296 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:12.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:12.866 Hugepages
00:04:12.866 node hugesize free / total
00:04:12.866 node0 1048576kB 0 / 0
00:04:12.866 node0 2048kB 0 / 0
00:04:12.866
00:04:12.866 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:12.866 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:12.866 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:12.866 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:12.866 + rm -f /tmp/spdk-ld-path
00:04:12.866 + source autorun-spdk.conf
00:04:12.866 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:12.866 ++ SPDK_RUN_ASAN=1
00:04:12.866 ++ SPDK_RUN_UBSAN=1
00:04:12.866 ++ SPDK_TEST_RAID=1
00:04:12.866 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:12.866 ++ RUN_NIGHTLY=0
00:04:12.866 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:12.866 + [[ -n '' ]]
00:04:12.866 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:12.866 + for M in /var/spdk/build-*-manifest.txt
00:04:12.866 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:12.866 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:12.866 + for M in /var/spdk/build-*-manifest.txt
00:04:12.866 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:12.866 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:13.126 + for M in /var/spdk/build-*-manifest.txt
00:04:13.126 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:13.126 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:13.126 ++ uname
00:04:13.126 + [[ Linux == \L\i\n\u\x ]]
00:04:13.126 + sudo dmesg -T
00:04:13.126 + sudo dmesg --clear
00:04:13.126 + dmesg_pid=5433
00:04:13.126 + [[ Fedora Linux == FreeBSD ]]
00:04:13.126 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:13.126 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:13.126 + sudo dmesg -Tw
00:04:13.126 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:13.126 + [[ -x /usr/src/fio-static/fio ]]
00:04:13.126 + export FIO_BIN=/usr/src/fio-static/fio
00:04:13.126 + FIO_BIN=/usr/src/fio-static/fio
00:04:13.126 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:13.126 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:13.126 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:13.126 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:13.126 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:13.126 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:13.126 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:13.126 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:13.126 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:13.126 20:17:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:13.126 20:17:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:13.126 20:17:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:13.126 20:17:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:04:13.126 20:17:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:04:13.126 20:17:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:04:13.126 20:17:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:13.126 20:17:06 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:04:13.126 20:17:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:13.126 20:17:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:13.390 20:17:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:13.390 20:17:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:13.390 20:17:06 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:13.390 20:17:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:13.390 20:17:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:13.390 20:17:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:13.390 20:17:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:13.390 20:17:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:13.390 20:17:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:13.391 20:17:06 -- paths/export.sh@5 -- $ export PATH
00:04:13.391 20:17:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:13.391 20:17:06 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:13.391 20:17:06 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:13.391 20:17:06 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732652226.XXXXXX
00:04:13.391 20:17:06 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732652226.4HxlGG
00:04:13.391 20:17:06 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:13.391 20:17:06 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:13.391 20:17:06 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:13.391 20:17:06 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:13.391 20:17:06 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:13.391 20:17:06 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:13.391 20:17:06 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:13.391 20:17:06 -- common/autotest_common.sh@10 -- $ set +x
00:04:13.391 20:17:06 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:04:13.391 20:17:06 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:13.391 20:17:06 -- pm/common@17 -- $ local monitor
00:04:13.391 20:17:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:13.391 20:17:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:13.391 20:17:06 -- pm/common@25 -- $ sleep 1
00:04:13.391 20:17:06 -- pm/common@21 -- $ date +%s
00:04:13.391 20:17:06 -- pm/common@21 -- $ date +%s
00:04:13.391 20:17:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652226
00:04:13.391 20:17:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652226
00:04:13.391 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652226_collect-cpu-load.pm.log
00:04:13.391 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652226_collect-vmstat.pm.log
00:04:14.331 20:17:07 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:14.331 20:17:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:14.331 20:17:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:14.331 20:17:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:14.331 20:17:07 -- spdk/autobuild.sh@16 -- $ date -u
00:04:14.331 Tue Nov 26 08:17:07 PM UTC 2024
00:04:14.331 20:17:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:14.331 v25.01-pre-245-g0836dccda
00:04:14.331 20:17:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:14.331 20:17:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:14.331 20:17:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:14.331 20:17:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:14.331 20:17:07 -- common/autotest_common.sh@10 -- $ set +x
00:04:14.331 ************************************
00:04:14.331 START TEST asan
00:04:14.331 ************************************
00:04:14.331 using asan
00:04:14.331 20:17:07 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:14.331
00:04:14.331 real 0m0.001s
00:04:14.331 user 0m0.000s
00:04:14.331 sys 0m0.000s
00:04:14.331 20:17:07 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:14.331 20:17:07 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:14.331 ************************************
00:04:14.331 END TEST asan
00:04:14.331 ************************************
00:04:14.591 20:17:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:14.591 20:17:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:14.591 20:17:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:14.591 20:17:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:14.591 20:17:07 -- common/autotest_common.sh@10 -- $ set +x
00:04:14.591 ************************************
00:04:14.591 START TEST ubsan
00:04:14.591 ************************************
00:04:14.591 using ubsan
00:04:14.591 20:17:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:14.591
00:04:14.591 real 0m0.000s
00:04:14.591 user 0m0.000s
00:04:14.591 sys 0m0.000s
00:04:14.591 20:17:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:14.591 20:17:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:14.591 ************************************
00:04:14.591 END TEST ubsan
00:04:14.591 ************************************
00:04:14.591 20:17:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:14.591 20:17:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:14.591 20:17:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:14.591 20:17:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:14.591 20:17:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:14.591 20:17:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:14.591 20:17:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:14.591 20:17:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:14.591 20:17:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:04:14.592 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:14.592 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:15.160 Using 'verbs' RDMA provider
00:04:31.002 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:45.921 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:46.180 Creating mk/config.mk...done.
00:04:46.180 Creating mk/cc.flags.mk...done.
00:04:46.180 Type 'make' to build.
00:04:46.180 20:17:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:46.180 20:17:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:46.180 20:17:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:46.180 20:17:39 -- common/autotest_common.sh@10 -- $ set +x
00:04:46.180 ************************************
00:04:46.180 START TEST make
00:04:46.180 ************************************
00:04:46.180 20:17:39 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:46.748 make[1]: Nothing to be done for 'all'.
00:05:01.626 The Meson build system 00:05:01.626 Version: 1.5.0 00:05:01.626 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:01.626 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:01.626 Build type: native build 00:05:01.626 Program cat found: YES (/usr/bin/cat) 00:05:01.626 Project name: DPDK 00:05:01.626 Project version: 24.03.0 00:05:01.626 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:01.626 C linker for the host machine: cc ld.bfd 2.40-14 00:05:01.626 Host machine cpu family: x86_64 00:05:01.626 Host machine cpu: x86_64 00:05:01.626 Message: ## Building in Developer Mode ## 00:05:01.626 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:01.626 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:01.626 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:01.626 Program python3 found: YES (/usr/bin/python3) 00:05:01.626 Program cat found: YES (/usr/bin/cat) 00:05:01.626 Compiler for C supports arguments -march=native: YES 00:05:01.626 Checking for size of "void *" : 8 00:05:01.627 Checking for size of "void *" : 8 (cached) 00:05:01.627 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:01.627 Library m found: YES 00:05:01.627 Library numa found: YES 00:05:01.627 Has header "numaif.h" : YES 00:05:01.627 Library fdt found: NO 00:05:01.627 Library execinfo found: NO 00:05:01.627 Has header "execinfo.h" : YES 00:05:01.627 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:01.627 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:01.627 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:01.627 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:01.627 Run-time dependency openssl found: YES 3.1.1 00:05:01.627 Run-time dependency libpcap found: YES 1.10.4 00:05:01.627 Has header "pcap.h" with dependency 
libpcap: YES 00:05:01.627 Compiler for C supports arguments -Wcast-qual: YES 00:05:01.627 Compiler for C supports arguments -Wdeprecated: YES 00:05:01.627 Compiler for C supports arguments -Wformat: YES 00:05:01.627 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:01.627 Compiler for C supports arguments -Wformat-security: NO 00:05:01.627 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:01.627 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:01.627 Compiler for C supports arguments -Wnested-externs: YES 00:05:01.627 Compiler for C supports arguments -Wold-style-definition: YES 00:05:01.627 Compiler for C supports arguments -Wpointer-arith: YES 00:05:01.627 Compiler for C supports arguments -Wsign-compare: YES 00:05:01.627 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:01.627 Compiler for C supports arguments -Wundef: YES 00:05:01.627 Compiler for C supports arguments -Wwrite-strings: YES 00:05:01.627 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:01.627 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:01.627 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:01.627 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:01.627 Program objdump found: YES (/usr/bin/objdump) 00:05:01.627 Compiler for C supports arguments -mavx512f: YES 00:05:01.627 Checking if "AVX512 checking" compiles: YES 00:05:01.627 Fetching value of define "__SSE4_2__" : 1 00:05:01.627 Fetching value of define "__AES__" : 1 00:05:01.627 Fetching value of define "__AVX__" : 1 00:05:01.627 Fetching value of define "__AVX2__" : 1 00:05:01.627 Fetching value of define "__AVX512BW__" : 1 00:05:01.627 Fetching value of define "__AVX512CD__" : 1 00:05:01.627 Fetching value of define "__AVX512DQ__" : 1 00:05:01.627 Fetching value of define "__AVX512F__" : 1 00:05:01.627 Fetching value of define "__AVX512VL__" : 1 00:05:01.627 Fetching value of define 
"__PCLMUL__" : 1 00:05:01.627 Fetching value of define "__RDRND__" : 1 00:05:01.627 Fetching value of define "__RDSEED__" : 1 00:05:01.627 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:01.627 Fetching value of define "__znver1__" : (undefined) 00:05:01.627 Fetching value of define "__znver2__" : (undefined) 00:05:01.627 Fetching value of define "__znver3__" : (undefined) 00:05:01.627 Fetching value of define "__znver4__" : (undefined) 00:05:01.627 Library asan found: YES 00:05:01.627 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:01.627 Message: lib/log: Defining dependency "log" 00:05:01.627 Message: lib/kvargs: Defining dependency "kvargs" 00:05:01.627 Message: lib/telemetry: Defining dependency "telemetry" 00:05:01.627 Library rt found: YES 00:05:01.627 Checking for function "getentropy" : NO 00:05:01.627 Message: lib/eal: Defining dependency "eal" 00:05:01.627 Message: lib/ring: Defining dependency "ring" 00:05:01.627 Message: lib/rcu: Defining dependency "rcu" 00:05:01.627 Message: lib/mempool: Defining dependency "mempool" 00:05:01.627 Message: lib/mbuf: Defining dependency "mbuf" 00:05:01.627 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:01.627 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:01.627 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:01.627 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:01.627 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:01.627 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:01.627 Compiler for C supports arguments -mpclmul: YES 00:05:01.627 Compiler for C supports arguments -maes: YES 00:05:01.627 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:01.627 Compiler for C supports arguments -mavx512bw: YES 00:05:01.627 Compiler for C supports arguments -mavx512dq: YES 00:05:01.627 Compiler for C supports arguments -mavx512vl: YES 00:05:01.627 Compiler for C supports arguments -mvpclmulqdq: YES 
00:05:01.627 Compiler for C supports arguments -mavx2: YES 00:05:01.627 Compiler for C supports arguments -mavx: YES 00:05:01.627 Message: lib/net: Defining dependency "net" 00:05:01.627 Message: lib/meter: Defining dependency "meter" 00:05:01.627 Message: lib/ethdev: Defining dependency "ethdev" 00:05:01.627 Message: lib/pci: Defining dependency "pci" 00:05:01.627 Message: lib/cmdline: Defining dependency "cmdline" 00:05:01.627 Message: lib/hash: Defining dependency "hash" 00:05:01.627 Message: lib/timer: Defining dependency "timer" 00:05:01.627 Message: lib/compressdev: Defining dependency "compressdev" 00:05:01.627 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:01.627 Message: lib/dmadev: Defining dependency "dmadev" 00:05:01.627 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:01.627 Message: lib/power: Defining dependency "power" 00:05:01.627 Message: lib/reorder: Defining dependency "reorder" 00:05:01.627 Message: lib/security: Defining dependency "security" 00:05:01.627 Has header "linux/userfaultfd.h" : YES 00:05:01.627 Has header "linux/vduse.h" : YES 00:05:01.627 Message: lib/vhost: Defining dependency "vhost" 00:05:01.627 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:01.627 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:01.627 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:01.627 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:01.627 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:01.627 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:01.627 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:01.627 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:01.627 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:01.627 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:01.627 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:01.627 Configuring doxy-api-html.conf using configuration 00:05:01.627 Configuring doxy-api-man.conf using configuration 00:05:01.627 Program mandb found: YES (/usr/bin/mandb) 00:05:01.627 Program sphinx-build found: NO 00:05:01.627 Configuring rte_build_config.h using configuration 00:05:01.627 Message: 00:05:01.627 ================= 00:05:01.627 Applications Enabled 00:05:01.627 ================= 00:05:01.627 00:05:01.627 apps: 00:05:01.627 00:05:01.627 00:05:01.627 Message: 00:05:01.627 ================= 00:05:01.627 Libraries Enabled 00:05:01.627 ================= 00:05:01.627 00:05:01.627 libs: 00:05:01.627 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:01.627 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:01.627 cryptodev, dmadev, power, reorder, security, vhost, 00:05:01.627 00:05:01.627 Message: 00:05:01.627 =============== 00:05:01.627 Drivers Enabled 00:05:01.627 =============== 00:05:01.627 00:05:01.627 common: 00:05:01.627 00:05:01.627 bus: 00:05:01.627 pci, vdev, 00:05:01.627 mempool: 00:05:01.627 ring, 00:05:01.627 dma: 00:05:01.628 00:05:01.628 net: 00:05:01.628 00:05:01.628 crypto: 00:05:01.628 00:05:01.628 compress: 00:05:01.628 00:05:01.628 vdpa: 00:05:01.628 00:05:01.628 00:05:01.628 Message: 00:05:01.628 ================= 00:05:01.628 Content Skipped 00:05:01.628 ================= 00:05:01.628 00:05:01.628 apps: 00:05:01.628 dumpcap: explicitly disabled via build config 00:05:01.628 graph: explicitly disabled via build config 00:05:01.628 pdump: explicitly disabled via build config 00:05:01.628 proc-info: explicitly disabled via build config 00:05:01.628 test-acl: explicitly disabled via build config 00:05:01.628 test-bbdev: explicitly disabled via build config 00:05:01.628 test-cmdline: explicitly disabled via build config 00:05:01.628 test-compress-perf: explicitly disabled via build config 00:05:01.628 test-crypto-perf: explicitly disabled via build 
config 00:05:01.628 test-dma-perf: explicitly disabled via build config 00:05:01.628 test-eventdev: explicitly disabled via build config 00:05:01.628 test-fib: explicitly disabled via build config 00:05:01.628 test-flow-perf: explicitly disabled via build config 00:05:01.628 test-gpudev: explicitly disabled via build config 00:05:01.628 test-mldev: explicitly disabled via build config 00:05:01.628 test-pipeline: explicitly disabled via build config 00:05:01.628 test-pmd: explicitly disabled via build config 00:05:01.628 test-regex: explicitly disabled via build config 00:05:01.628 test-sad: explicitly disabled via build config 00:05:01.628 test-security-perf: explicitly disabled via build config 00:05:01.628 00:05:01.628 libs: 00:05:01.628 argparse: explicitly disabled via build config 00:05:01.628 metrics: explicitly disabled via build config 00:05:01.628 acl: explicitly disabled via build config 00:05:01.628 bbdev: explicitly disabled via build config 00:05:01.628 bitratestats: explicitly disabled via build config 00:05:01.628 bpf: explicitly disabled via build config 00:05:01.628 cfgfile: explicitly disabled via build config 00:05:01.628 distributor: explicitly disabled via build config 00:05:01.628 efd: explicitly disabled via build config 00:05:01.628 eventdev: explicitly disabled via build config 00:05:01.628 dispatcher: explicitly disabled via build config 00:05:01.628 gpudev: explicitly disabled via build config 00:05:01.628 gro: explicitly disabled via build config 00:05:01.628 gso: explicitly disabled via build config 00:05:01.628 ip_frag: explicitly disabled via build config 00:05:01.628 jobstats: explicitly disabled via build config 00:05:01.628 latencystats: explicitly disabled via build config 00:05:01.628 lpm: explicitly disabled via build config 00:05:01.628 member: explicitly disabled via build config 00:05:01.628 pcapng: explicitly disabled via build config 00:05:01.628 rawdev: explicitly disabled via build config 00:05:01.628 regexdev: explicitly 
disabled via build config 00:05:01.628 mldev: explicitly disabled via build config 00:05:01.628 rib: explicitly disabled via build config 00:05:01.628 sched: explicitly disabled via build config 00:05:01.628 stack: explicitly disabled via build config 00:05:01.628 ipsec: explicitly disabled via build config 00:05:01.628 pdcp: explicitly disabled via build config 00:05:01.628 fib: explicitly disabled via build config 00:05:01.628 port: explicitly disabled via build config 00:05:01.628 pdump: explicitly disabled via build config 00:05:01.628 table: explicitly disabled via build config 00:05:01.628 pipeline: explicitly disabled via build config 00:05:01.628 graph: explicitly disabled via build config 00:05:01.628 node: explicitly disabled via build config 00:05:01.628 00:05:01.628 drivers: 00:05:01.628 common/cpt: not in enabled drivers build config 00:05:01.628 common/dpaax: not in enabled drivers build config 00:05:01.628 common/iavf: not in enabled drivers build config 00:05:01.628 common/idpf: not in enabled drivers build config 00:05:01.628 common/ionic: not in enabled drivers build config 00:05:01.628 common/mvep: not in enabled drivers build config 00:05:01.628 common/octeontx: not in enabled drivers build config 00:05:01.628 bus/auxiliary: not in enabled drivers build config 00:05:01.628 bus/cdx: not in enabled drivers build config 00:05:01.628 bus/dpaa: not in enabled drivers build config 00:05:01.628 bus/fslmc: not in enabled drivers build config 00:05:01.628 bus/ifpga: not in enabled drivers build config 00:05:01.628 bus/platform: not in enabled drivers build config 00:05:01.628 bus/uacce: not in enabled drivers build config 00:05:01.628 bus/vmbus: not in enabled drivers build config 00:05:01.628 common/cnxk: not in enabled drivers build config 00:05:01.628 common/mlx5: not in enabled drivers build config 00:05:01.628 common/nfp: not in enabled drivers build config 00:05:01.628 common/nitrox: not in enabled drivers build config 00:05:01.628 common/qat: not 
in enabled drivers build config 00:05:01.628 common/sfc_efx: not in enabled drivers build config 00:05:01.628 mempool/bucket: not in enabled drivers build config 00:05:01.628 mempool/cnxk: not in enabled drivers build config 00:05:01.628 mempool/dpaa: not in enabled drivers build config 00:05:01.628 mempool/dpaa2: not in enabled drivers build config 00:05:01.628 mempool/octeontx: not in enabled drivers build config 00:05:01.628 mempool/stack: not in enabled drivers build config 00:05:01.628 dma/cnxk: not in enabled drivers build config 00:05:01.628 dma/dpaa: not in enabled drivers build config 00:05:01.628 dma/dpaa2: not in enabled drivers build config 00:05:01.628 dma/hisilicon: not in enabled drivers build config 00:05:01.628 dma/idxd: not in enabled drivers build config 00:05:01.628 dma/ioat: not in enabled drivers build config 00:05:01.628 dma/skeleton: not in enabled drivers build config 00:05:01.628 net/af_packet: not in enabled drivers build config 00:05:01.628 net/af_xdp: not in enabled drivers build config 00:05:01.628 net/ark: not in enabled drivers build config 00:05:01.628 net/atlantic: not in enabled drivers build config 00:05:01.628 net/avp: not in enabled drivers build config 00:05:01.628 net/axgbe: not in enabled drivers build config 00:05:01.628 net/bnx2x: not in enabled drivers build config 00:05:01.628 net/bnxt: not in enabled drivers build config 00:05:01.628 net/bonding: not in enabled drivers build config 00:05:01.628 net/cnxk: not in enabled drivers build config 00:05:01.628 net/cpfl: not in enabled drivers build config 00:05:01.628 net/cxgbe: not in enabled drivers build config 00:05:01.628 net/dpaa: not in enabled drivers build config 00:05:01.628 net/dpaa2: not in enabled drivers build config 00:05:01.628 net/e1000: not in enabled drivers build config 00:05:01.628 net/ena: not in enabled drivers build config 00:05:01.628 net/enetc: not in enabled drivers build config 00:05:01.628 net/enetfec: not in enabled drivers build config 
00:05:01.628 net/enic: not in enabled drivers build config 00:05:01.628 net/failsafe: not in enabled drivers build config 00:05:01.628 net/fm10k: not in enabled drivers build config 00:05:01.628 net/gve: not in enabled drivers build config 00:05:01.628 net/hinic: not in enabled drivers build config 00:05:01.628 net/hns3: not in enabled drivers build config 00:05:01.628 net/i40e: not in enabled drivers build config 00:05:01.628 net/iavf: not in enabled drivers build config 00:05:01.628 net/ice: not in enabled drivers build config 00:05:01.628 net/idpf: not in enabled drivers build config 00:05:01.628 net/igc: not in enabled drivers build config 00:05:01.628 net/ionic: not in enabled drivers build config 00:05:01.628 net/ipn3ke: not in enabled drivers build config 00:05:01.629 net/ixgbe: not in enabled drivers build config 00:05:01.629 net/mana: not in enabled drivers build config 00:05:01.629 net/memif: not in enabled drivers build config 00:05:01.629 net/mlx4: not in enabled drivers build config 00:05:01.629 net/mlx5: not in enabled drivers build config 00:05:01.629 net/mvneta: not in enabled drivers build config 00:05:01.629 net/mvpp2: not in enabled drivers build config 00:05:01.629 net/netvsc: not in enabled drivers build config 00:05:01.629 net/nfb: not in enabled drivers build config 00:05:01.629 net/nfp: not in enabled drivers build config 00:05:01.629 net/ngbe: not in enabled drivers build config 00:05:01.629 net/null: not in enabled drivers build config 00:05:01.629 net/octeontx: not in enabled drivers build config 00:05:01.629 net/octeon_ep: not in enabled drivers build config 00:05:01.629 net/pcap: not in enabled drivers build config 00:05:01.629 net/pfe: not in enabled drivers build config 00:05:01.629 net/qede: not in enabled drivers build config 00:05:01.629 net/ring: not in enabled drivers build config 00:05:01.629 net/sfc: not in enabled drivers build config 00:05:01.629 net/softnic: not in enabled drivers build config 00:05:01.629 net/tap: not in 
enabled drivers build config 00:05:01.629 net/thunderx: not in enabled drivers build config 00:05:01.629 net/txgbe: not in enabled drivers build config 00:05:01.629 net/vdev_netvsc: not in enabled drivers build config 00:05:01.629 net/vhost: not in enabled drivers build config 00:05:01.629 net/virtio: not in enabled drivers build config 00:05:01.629 net/vmxnet3: not in enabled drivers build config 00:05:01.629 raw/*: missing internal dependency, "rawdev" 00:05:01.629 crypto/armv8: not in enabled drivers build config 00:05:01.629 crypto/bcmfs: not in enabled drivers build config 00:05:01.629 crypto/caam_jr: not in enabled drivers build config 00:05:01.629 crypto/ccp: not in enabled drivers build config 00:05:01.629 crypto/cnxk: not in enabled drivers build config 00:05:01.629 crypto/dpaa_sec: not in enabled drivers build config 00:05:01.629 crypto/dpaa2_sec: not in enabled drivers build config 00:05:01.629 crypto/ipsec_mb: not in enabled drivers build config 00:05:01.629 crypto/mlx5: not in enabled drivers build config 00:05:01.629 crypto/mvsam: not in enabled drivers build config 00:05:01.629 crypto/nitrox: not in enabled drivers build config 00:05:01.629 crypto/null: not in enabled drivers build config 00:05:01.629 crypto/octeontx: not in enabled drivers build config 00:05:01.629 crypto/openssl: not in enabled drivers build config 00:05:01.629 crypto/scheduler: not in enabled drivers build config 00:05:01.629 crypto/uadk: not in enabled drivers build config 00:05:01.629 crypto/virtio: not in enabled drivers build config 00:05:01.629 compress/isal: not in enabled drivers build config 00:05:01.629 compress/mlx5: not in enabled drivers build config 00:05:01.629 compress/nitrox: not in enabled drivers build config 00:05:01.629 compress/octeontx: not in enabled drivers build config 00:05:01.629 compress/zlib: not in enabled drivers build config 00:05:01.629 regex/*: missing internal dependency, "regexdev" 00:05:01.629 ml/*: missing internal dependency, "mldev" 
00:05:01.629 vdpa/ifc: not in enabled drivers build config 00:05:01.629 vdpa/mlx5: not in enabled drivers build config 00:05:01.629 vdpa/nfp: not in enabled drivers build config 00:05:01.629 vdpa/sfc: not in enabled drivers build config 00:05:01.629 event/*: missing internal dependency, "eventdev" 00:05:01.629 baseband/*: missing internal dependency, "bbdev" 00:05:01.629 gpu/*: missing internal dependency, "gpudev" 00:05:01.629 00:05:01.629 00:05:01.629 Build targets in project: 85 00:05:01.629 00:05:01.629 DPDK 24.03.0 00:05:01.629 00:05:01.629 User defined options 00:05:01.629 buildtype : debug 00:05:01.629 default_library : shared 00:05:01.629 libdir : lib 00:05:01.629 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:01.629 b_sanitize : address 00:05:01.629 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:01.629 c_link_args : 00:05:01.629 cpu_instruction_set: native 00:05:01.629 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:01.629 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:01.629 enable_docs : false 00:05:01.629 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:01.629 enable_kmods : false 00:05:01.629 max_lcores : 128 00:05:01.629 tests : false 00:05:01.629 00:05:01.629 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:01.629 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:01.629 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:05:01.629 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:01.629 [3/268] Linking static target lib/librte_kvargs.a 00:05:01.629 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:01.629 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:01.629 [6/268] Linking static target lib/librte_log.a 00:05:01.629 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:01.629 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:01.629 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:01.629 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:01.629 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.629 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:01.629 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:01.629 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:01.629 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:01.629 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:01.629 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:01.629 [18/268] Linking static target lib/librte_telemetry.a 00:05:01.629 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.629 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:01.629 [21/268] Linking target lib/librte_log.so.24.1 00:05:01.629 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:01.629 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:01.888 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:01.888 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:01.888 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:01.888 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:01.888 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:01.888 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:01.888 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:02.146 [31/268] Linking target lib/librte_kvargs.so.24.1 00:05:02.146 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.146 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:02.404 [34/268] Linking target lib/librte_telemetry.so.24.1 00:05:02.404 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:02.404 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:02.404 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:02.404 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:02.404 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:02.662 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:02.662 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:02.662 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:02.662 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:02.662 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:02.662 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:05:02.662 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:02.921 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:02.921 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:03.179 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:03.179 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:03.179 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:03.437 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:03.437 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:03.437 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:03.437 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:03.437 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:03.695 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:03.696 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:03.696 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:03.955 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:03.955 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:03.955 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:03.955 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:03.955 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:03.955 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:04.214 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:04.214 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:04.473 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:04.473 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:04.473 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:04.473 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:04.473 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:04.732 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:04.732 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:04.732 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:04.732 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:04.732 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:05.003 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:05.003 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:05.003 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:05.003 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:05.261 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:05.520 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:05.520 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:05.520 [85/268] Linking static target lib/librte_ring.a 00:05:05.520 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:05.520 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:05.520 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:05.520 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:05.520 [90/268] Linking static target lib/librte_eal.a 00:05:05.777 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 
00:05:05.777 [92/268] Linking static target lib/librte_mempool.a 00:05:05.777 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:06.035 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:06.035 [95/268] Linking static target lib/librte_rcu.a 00:05:06.035 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:06.035 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:06.035 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.294 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:06.294 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:06.294 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:06.552 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:06.552 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.552 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:06.552 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:06.552 [106/268] Linking static target lib/librte_mbuf.a 00:05:06.552 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:06.552 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:06.810 [109/268] Linking static target lib/librte_net.a 00:05:06.810 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:06.810 [111/268] Linking static target lib/librte_meter.a 00:05:07.069 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:07.069 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:07.069 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.328 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:07.328 
[116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.328 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:07.328 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.586 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:07.843 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.843 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:08.102 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:08.102 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:08.102 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:08.360 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:08.360 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:08.360 [127/268] Linking static target lib/librte_pci.a 00:05:08.360 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:08.617 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:08.617 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:08.617 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:08.875 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.875 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:08.875 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:08.875 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:08.875 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:08.875 [137/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:08.875 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:08.875 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:08.875 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:08.875 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:09.133 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:09.133 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:09.133 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:09.133 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:09.133 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:09.133 [147/268] Linking static target lib/librte_cmdline.a 00:05:09.392 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:09.392 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:09.651 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:09.651 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:09.651 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:09.651 [153/268] Linking static target lib/librte_timer.a 00:05:09.909 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:10.167 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:10.167 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:10.167 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:10.426 [158/268] Linking static target lib/librte_compressdev.a 00:05:10.426 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:05:10.426 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:10.426 [161/268] Linking static target lib/librte_hash.a 00:05:10.426 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:10.684 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:10.684 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:10.684 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:10.684 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:10.943 [167/268] Linking static target lib/librte_ethdev.a 00:05:10.943 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:10.943 [169/268] Linking static target lib/librte_dmadev.a 00:05:10.943 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.943 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:10.943 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:11.201 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:11.201 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:11.460 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.460 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:11.718 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:11.718 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:11.718 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:11.718 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.718 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:11.977 [182/268] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.977 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:11.977 [184/268] Linking static target lib/librte_cryptodev.a 00:05:11.977 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:11.977 [186/268] Linking static target lib/librte_power.a 00:05:12.236 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:12.236 [188/268] Linking static target lib/librte_reorder.a 00:05:12.495 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:12.495 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:12.495 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:12.495 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:12.495 [193/268] Linking static target lib/librte_security.a 00:05:13.062 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.062 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:13.436 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.436 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.436 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:13.437 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:13.437 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:14.002 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:14.002 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:14.260 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:14.260 [204/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:14.260 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:14.260 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:14.519 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:14.519 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:14.519 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:14.519 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:14.519 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.778 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:14.778 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:14.778 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:14.778 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:14.778 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:14.778 [217/268] Linking static target drivers/librte_bus_vdev.a 00:05:14.778 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:14.778 [219/268] Linking static target drivers/librte_bus_pci.a 00:05:15.036 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:15.036 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:15.294 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:15.294 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.294 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:15.294 
[225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:15.294 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:15.552 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.929 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:17.187 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.187 [230/268] Linking target lib/librte_eal.so.24.1 00:05:17.467 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:17.467 [232/268] Linking target lib/librte_ring.so.24.1 00:05:17.467 [233/268] Linking target lib/librte_pci.so.24.1 00:05:17.467 [234/268] Linking target lib/librte_dmadev.so.24.1 00:05:17.467 [235/268] Linking target lib/librte_timer.so.24.1 00:05:17.467 [236/268] Linking target lib/librte_meter.so.24.1 00:05:17.467 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:17.467 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:17.467 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:17.727 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:17.727 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:17.727 [242/268] Linking target lib/librte_rcu.so.24.1 00:05:17.727 [243/268] Linking target lib/librte_mempool.so.24.1 00:05:17.727 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:17.727 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:17.727 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:17.727 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:17.727 [248/268] Linking target 
lib/librte_mbuf.so.24.1 00:05:17.727 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:17.987 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:17.987 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:05:17.987 [252/268] Linking target lib/librte_compressdev.so.24.1 00:05:17.987 [253/268] Linking target lib/librte_reorder.so.24.1 00:05:17.987 [254/268] Linking target lib/librte_net.so.24.1 00:05:18.245 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:18.245 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:18.245 [257/268] Linking target lib/librte_security.so.24.1 00:05:18.245 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:18.245 [259/268] Linking target lib/librte_hash.so.24.1 00:05:18.505 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:19.076 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.335 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:19.594 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:19.594 [264/268] Linking target lib/librte_power.so.24.1 00:05:22.132 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:22.132 [266/268] Linking static target lib/librte_vhost.a 00:05:24.053 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.053 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:24.053 INFO: autodetecting backend as ninja 00:05:24.053 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:42.157 CC lib/log/log_flags.o 00:05:42.157 CC lib/log/log_deprecated.o 00:05:42.157 CC lib/log/log.o 00:05:42.157 CC lib/ut_mock/mock.o 00:05:42.157 CC lib/ut/ut.o 00:05:42.157 LIB 
libspdk_log.a 00:05:42.157 LIB libspdk_ut_mock.a 00:05:42.157 LIB libspdk_ut.a 00:05:42.157 SO libspdk_log.so.7.1 00:05:42.157 SO libspdk_ut_mock.so.6.0 00:05:42.157 SO libspdk_ut.so.2.0 00:05:42.157 SYMLINK libspdk_log.so 00:05:42.157 SYMLINK libspdk_ut_mock.so 00:05:42.157 SYMLINK libspdk_ut.so 00:05:42.416 CC lib/util/base64.o 00:05:42.416 CC lib/util/cpuset.o 00:05:42.416 CC lib/util/bit_array.o 00:05:42.416 CC lib/util/crc16.o 00:05:42.416 CC lib/util/crc32c.o 00:05:42.416 CC lib/util/crc32.o 00:05:42.416 CC lib/ioat/ioat.o 00:05:42.416 CC lib/dma/dma.o 00:05:42.416 CXX lib/trace_parser/trace.o 00:05:42.674 CC lib/util/crc32_ieee.o 00:05:42.674 CC lib/util/crc64.o 00:05:42.674 CC lib/vfio_user/host/vfio_user_pci.o 00:05:42.674 CC lib/util/dif.o 00:05:42.674 CC lib/vfio_user/host/vfio_user.o 00:05:42.674 CC lib/util/fd.o 00:05:42.674 LIB libspdk_dma.a 00:05:42.674 SO libspdk_dma.so.5.0 00:05:42.674 CC lib/util/fd_group.o 00:05:42.674 CC lib/util/file.o 00:05:42.674 CC lib/util/hexlify.o 00:05:42.674 SYMLINK libspdk_dma.so 00:05:42.674 CC lib/util/iov.o 00:05:42.674 LIB libspdk_ioat.a 00:05:42.674 CC lib/util/math.o 00:05:42.674 SO libspdk_ioat.so.7.0 00:05:42.933 CC lib/util/net.o 00:05:42.933 CC lib/util/pipe.o 00:05:42.933 LIB libspdk_vfio_user.a 00:05:42.933 CC lib/util/strerror_tls.o 00:05:42.933 SYMLINK libspdk_ioat.so 00:05:42.933 CC lib/util/string.o 00:05:42.933 SO libspdk_vfio_user.so.5.0 00:05:42.933 CC lib/util/uuid.o 00:05:42.933 SYMLINK libspdk_vfio_user.so 00:05:42.933 CC lib/util/xor.o 00:05:42.933 CC lib/util/zipf.o 00:05:42.933 CC lib/util/md5.o 00:05:43.192 LIB libspdk_util.a 00:05:43.451 SO libspdk_util.so.10.1 00:05:43.451 LIB libspdk_trace_parser.a 00:05:43.451 SO libspdk_trace_parser.so.6.0 00:05:43.710 SYMLINK libspdk_util.so 00:05:43.710 SYMLINK libspdk_trace_parser.so 00:05:43.710 CC lib/conf/conf.o 00:05:43.710 CC lib/idxd/idxd.o 00:05:43.710 CC lib/json/json_parse.o 00:05:43.710 CC lib/idxd/idxd_user.o 00:05:43.710 CC 
lib/json/json_write.o 00:05:43.710 CC lib/idxd/idxd_kernel.o 00:05:43.710 CC lib/json/json_util.o 00:05:43.710 CC lib/rdma_utils/rdma_utils.o 00:05:43.710 CC lib/env_dpdk/env.o 00:05:43.710 CC lib/vmd/vmd.o 00:05:43.969 CC lib/env_dpdk/memory.o 00:05:43.969 LIB libspdk_conf.a 00:05:43.969 CC lib/env_dpdk/pci.o 00:05:43.969 CC lib/env_dpdk/init.o 00:05:43.969 SO libspdk_conf.so.6.0 00:05:43.969 CC lib/env_dpdk/threads.o 00:05:43.969 LIB libspdk_rdma_utils.a 00:05:44.227 SO libspdk_rdma_utils.so.1.0 00:05:44.227 SYMLINK libspdk_conf.so 00:05:44.227 CC lib/env_dpdk/pci_ioat.o 00:05:44.227 LIB libspdk_json.a 00:05:44.227 SYMLINK libspdk_rdma_utils.so 00:05:44.227 SO libspdk_json.so.6.0 00:05:44.227 CC lib/env_dpdk/pci_virtio.o 00:05:44.227 CC lib/env_dpdk/pci_vmd.o 00:05:44.227 SYMLINK libspdk_json.so 00:05:44.227 CC lib/env_dpdk/pci_idxd.o 00:05:44.227 CC lib/vmd/led.o 00:05:44.486 CC lib/rdma_provider/common.o 00:05:44.486 CC lib/jsonrpc/jsonrpc_server.o 00:05:44.486 CC lib/env_dpdk/pci_event.o 00:05:44.486 CC lib/env_dpdk/sigbus_handler.o 00:05:44.486 CC lib/env_dpdk/pci_dpdk.o 00:05:44.486 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:44.486 LIB libspdk_idxd.a 00:05:44.486 SO libspdk_idxd.so.12.1 00:05:44.486 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:44.486 LIB libspdk_vmd.a 00:05:44.486 SO libspdk_vmd.so.6.0 00:05:44.486 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:44.486 CC lib/jsonrpc/jsonrpc_client.o 00:05:44.486 SYMLINK libspdk_idxd.so 00:05:44.486 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:44.746 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:44.746 SYMLINK libspdk_vmd.so 00:05:44.746 LIB libspdk_rdma_provider.a 00:05:44.746 SO libspdk_rdma_provider.so.7.0 00:05:44.746 LIB libspdk_jsonrpc.a 00:05:45.005 SYMLINK libspdk_rdma_provider.so 00:05:45.005 SO libspdk_jsonrpc.so.6.0 00:05:45.005 SYMLINK libspdk_jsonrpc.so 00:05:45.574 CC lib/rpc/rpc.o 00:05:45.574 LIB libspdk_env_dpdk.a 00:05:45.574 SO libspdk_env_dpdk.so.15.1 00:05:45.574 LIB libspdk_rpc.a 00:05:45.836 SO 
libspdk_rpc.so.6.0 00:05:45.836 SYMLINK libspdk_env_dpdk.so 00:05:45.836 SYMLINK libspdk_rpc.so 00:05:46.099 CC lib/notify/notify.o 00:05:46.099 CC lib/notify/notify_rpc.o 00:05:46.099 CC lib/trace/trace.o 00:05:46.099 CC lib/trace/trace_flags.o 00:05:46.099 CC lib/trace/trace_rpc.o 00:05:46.099 CC lib/keyring/keyring_rpc.o 00:05:46.099 CC lib/keyring/keyring.o 00:05:46.358 LIB libspdk_notify.a 00:05:46.358 SO libspdk_notify.so.6.0 00:05:46.358 LIB libspdk_keyring.a 00:05:46.358 SYMLINK libspdk_notify.so 00:05:46.358 LIB libspdk_trace.a 00:05:46.358 SO libspdk_keyring.so.2.0 00:05:46.618 SO libspdk_trace.so.11.0 00:05:46.618 SYMLINK libspdk_keyring.so 00:05:46.618 SYMLINK libspdk_trace.so 00:05:46.877 CC lib/thread/thread.o 00:05:46.877 CC lib/thread/iobuf.o 00:05:46.877 CC lib/sock/sock.o 00:05:46.877 CC lib/sock/sock_rpc.o 00:05:47.446 LIB libspdk_sock.a 00:05:47.446 SO libspdk_sock.so.10.0 00:05:47.446 SYMLINK libspdk_sock.so 00:05:48.016 CC lib/nvme/nvme_ctrlr.o 00:05:48.016 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:48.016 CC lib/nvme/nvme_fabric.o 00:05:48.016 CC lib/nvme/nvme_ns_cmd.o 00:05:48.016 CC lib/nvme/nvme_ns.o 00:05:48.016 CC lib/nvme/nvme_pcie_common.o 00:05:48.016 CC lib/nvme/nvme.o 00:05:48.016 CC lib/nvme/nvme_pcie.o 00:05:48.016 CC lib/nvme/nvme_qpair.o 00:05:48.585 LIB libspdk_thread.a 00:05:48.585 SO libspdk_thread.so.11.0 00:05:48.585 CC lib/nvme/nvme_quirks.o 00:05:48.585 CC lib/nvme/nvme_transport.o 00:05:48.843 SYMLINK libspdk_thread.so 00:05:48.844 CC lib/nvme/nvme_discovery.o 00:05:48.844 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:48.844 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:48.844 CC lib/nvme/nvme_tcp.o 00:05:48.844 CC lib/nvme/nvme_opal.o 00:05:48.844 CC lib/nvme/nvme_io_msg.o 00:05:49.103 CC lib/nvme/nvme_poll_group.o 00:05:49.103 CC lib/nvme/nvme_zns.o 00:05:49.364 CC lib/nvme/nvme_stubs.o 00:05:49.364 CC lib/nvme/nvme_auth.o 00:05:49.364 CC lib/accel/accel.o 00:05:49.364 CC lib/nvme/nvme_cuse.o 00:05:49.624 CC lib/nvme/nvme_rdma.o 
00:05:49.624 CC lib/accel/accel_rpc.o 00:05:49.624 CC lib/accel/accel_sw.o 00:05:49.884 CC lib/blob/blobstore.o 00:05:50.143 CC lib/init/json_config.o 00:05:50.143 CC lib/virtio/virtio.o 00:05:50.143 CC lib/blob/request.o 00:05:50.403 CC lib/init/subsystem.o 00:05:50.403 CC lib/blob/zeroes.o 00:05:50.403 CC lib/blob/blob_bs_dev.o 00:05:50.403 CC lib/virtio/virtio_vhost_user.o 00:05:50.403 CC lib/virtio/virtio_vfio_user.o 00:05:50.403 CC lib/init/subsystem_rpc.o 00:05:50.404 CC lib/init/rpc.o 00:05:50.664 CC lib/virtio/virtio_pci.o 00:05:50.664 CC lib/fsdev/fsdev.o 00:05:50.664 CC lib/fsdev/fsdev_io.o 00:05:50.664 CC lib/fsdev/fsdev_rpc.o 00:05:50.664 LIB libspdk_init.a 00:05:50.664 LIB libspdk_accel.a 00:05:50.664 SO libspdk_init.so.6.0 00:05:50.664 SO libspdk_accel.so.16.0 00:05:50.664 SYMLINK libspdk_init.so 00:05:50.664 SYMLINK libspdk_accel.so 00:05:50.924 LIB libspdk_virtio.a 00:05:50.924 SO libspdk_virtio.so.7.0 00:05:50.924 CC lib/event/app.o 00:05:50.924 CC lib/event/reactor.o 00:05:50.924 CC lib/event/log_rpc.o 00:05:50.924 CC lib/event/app_rpc.o 00:05:50.924 CC lib/event/scheduler_static.o 00:05:50.924 SYMLINK libspdk_virtio.so 00:05:50.924 CC lib/bdev/bdev.o 00:05:50.924 CC lib/bdev/bdev_rpc.o 00:05:50.924 LIB libspdk_nvme.a 00:05:51.185 CC lib/bdev/bdev_zone.o 00:05:51.185 CC lib/bdev/part.o 00:05:51.185 LIB libspdk_fsdev.a 00:05:51.185 SO libspdk_nvme.so.15.0 00:05:51.185 CC lib/bdev/scsi_nvme.o 00:05:51.185 SO libspdk_fsdev.so.2.0 00:05:51.445 SYMLINK libspdk_fsdev.so 00:05:51.445 LIB libspdk_event.a 00:05:51.445 SYMLINK libspdk_nvme.so 00:05:51.445 SO libspdk_event.so.14.0 00:05:51.705 SYMLINK libspdk_event.so 00:05:51.705 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:52.274 LIB libspdk_fuse_dispatcher.a 00:05:52.274 SO libspdk_fuse_dispatcher.so.1.0 00:05:52.533 SYMLINK libspdk_fuse_dispatcher.so 00:05:53.919 LIB libspdk_blob.a 00:05:53.919 SO libspdk_blob.so.12.0 00:05:53.919 SYMLINK libspdk_blob.so 00:05:53.919 LIB libspdk_bdev.a 00:05:54.178 SO 
libspdk_bdev.so.17.0 00:05:54.178 SYMLINK libspdk_bdev.so 00:05:54.178 CC lib/lvol/lvol.o 00:05:54.178 CC lib/blobfs/tree.o 00:05:54.178 CC lib/blobfs/blobfs.o 00:05:54.436 CC lib/nvmf/ctrlr.o 00:05:54.436 CC lib/nvmf/ctrlr_discovery.o 00:05:54.436 CC lib/nvmf/ctrlr_bdev.o 00:05:54.436 CC lib/scsi/dev.o 00:05:54.436 CC lib/ublk/ublk.o 00:05:54.436 CC lib/ftl/ftl_core.o 00:05:54.436 CC lib/nbd/nbd.o 00:05:54.436 CC lib/nbd/nbd_rpc.o 00:05:54.694 CC lib/ublk/ublk_rpc.o 00:05:54.695 CC lib/scsi/lun.o 00:05:54.695 CC lib/scsi/port.o 00:05:54.695 CC lib/ftl/ftl_init.o 00:05:54.953 LIB libspdk_nbd.a 00:05:54.953 SO libspdk_nbd.so.7.0 00:05:54.953 CC lib/ftl/ftl_layout.o 00:05:54.953 CC lib/ftl/ftl_debug.o 00:05:54.953 CC lib/scsi/scsi.o 00:05:54.953 SYMLINK libspdk_nbd.so 00:05:54.953 CC lib/scsi/scsi_bdev.o 00:05:54.953 CC lib/ftl/ftl_io.o 00:05:55.212 CC lib/ftl/ftl_sb.o 00:05:55.212 LIB libspdk_ublk.a 00:05:55.212 SO libspdk_ublk.so.3.0 00:05:55.212 LIB libspdk_blobfs.a 00:05:55.212 CC lib/nvmf/subsystem.o 00:05:55.212 CC lib/nvmf/nvmf.o 00:05:55.212 SO libspdk_blobfs.so.11.0 00:05:55.212 SYMLINK libspdk_ublk.so 00:05:55.212 CC lib/scsi/scsi_pr.o 00:05:55.212 CC lib/scsi/scsi_rpc.o 00:05:55.212 CC lib/scsi/task.o 00:05:55.212 SYMLINK libspdk_blobfs.so 00:05:55.212 CC lib/nvmf/nvmf_rpc.o 00:05:55.212 LIB libspdk_lvol.a 00:05:55.212 SO libspdk_lvol.so.11.0 00:05:55.212 CC lib/ftl/ftl_l2p.o 00:05:55.471 SYMLINK libspdk_lvol.so 00:05:55.471 CC lib/ftl/ftl_l2p_flat.o 00:05:55.471 CC lib/ftl/ftl_nv_cache.o 00:05:55.471 CC lib/ftl/ftl_band.o 00:05:55.471 CC lib/nvmf/transport.o 00:05:55.471 CC lib/ftl/ftl_band_ops.o 00:05:55.730 LIB libspdk_scsi.a 00:05:55.730 CC lib/ftl/ftl_writer.o 00:05:55.730 SO libspdk_scsi.so.9.0 00:05:55.730 SYMLINK libspdk_scsi.so 00:05:55.730 CC lib/ftl/ftl_rq.o 00:05:55.988 CC lib/ftl/ftl_reloc.o 00:05:55.988 CC lib/ftl/ftl_l2p_cache.o 00:05:55.988 CC lib/ftl/ftl_p2l.o 00:05:55.988 CC lib/iscsi/conn.o 00:05:56.247 CC lib/ftl/ftl_p2l_log.o 
00:05:56.247 CC lib/ftl/mngt/ftl_mngt.o 00:05:56.247 CC lib/iscsi/init_grp.o 00:05:56.507 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:56.507 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:56.507 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:56.507 CC lib/vhost/vhost.o 00:05:56.507 CC lib/vhost/vhost_rpc.o 00:05:56.766 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:56.766 CC lib/iscsi/iscsi.o 00:05:56.766 CC lib/iscsi/param.o 00:05:56.766 CC lib/iscsi/portal_grp.o 00:05:56.766 CC lib/iscsi/tgt_node.o 00:05:56.766 CC lib/nvmf/tcp.o 00:05:56.766 CC lib/vhost/vhost_scsi.o 00:05:56.766 CC lib/vhost/vhost_blk.o 00:05:57.025 CC lib/iscsi/iscsi_subsystem.o 00:05:57.025 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:57.025 CC lib/nvmf/stubs.o 00:05:57.284 CC lib/nvmf/mdns_server.o 00:05:57.284 CC lib/vhost/rte_vhost_user.o 00:05:57.284 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:57.544 CC lib/iscsi/iscsi_rpc.o 00:05:57.544 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:57.544 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:57.544 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:57.803 CC lib/nvmf/rdma.o 00:05:57.803 CC lib/iscsi/task.o 00:05:57.803 CC lib/nvmf/auth.o 00:05:57.803 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:57.803 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:57.803 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:57.803 CC lib/ftl/utils/ftl_conf.o 00:05:58.062 CC lib/ftl/utils/ftl_md.o 00:05:58.062 CC lib/ftl/utils/ftl_mempool.o 00:05:58.062 CC lib/ftl/utils/ftl_bitmap.o 00:05:58.062 CC lib/ftl/utils/ftl_property.o 00:05:58.322 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:58.322 LIB libspdk_iscsi.a 00:05:58.322 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:58.322 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:58.322 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:58.322 SO libspdk_iscsi.so.8.0 00:05:58.322 LIB libspdk_vhost.a 00:05:58.581 SO libspdk_vhost.so.8.0 00:05:58.581 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:58.581 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:58.581 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:58.581 SYMLINK 
libspdk_iscsi.so 00:05:58.581 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:58.581 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:58.581 SYMLINK libspdk_vhost.so 00:05:58.581 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:58.581 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:58.581 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:58.581 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:58.840 CC lib/ftl/base/ftl_base_dev.o 00:05:58.840 CC lib/ftl/base/ftl_base_bdev.o 00:05:58.840 CC lib/ftl/ftl_trace.o 00:05:59.099 LIB libspdk_ftl.a 00:05:59.358 SO libspdk_ftl.so.9.0 00:05:59.616 SYMLINK libspdk_ftl.so 00:06:00.551 LIB libspdk_nvmf.a 00:06:00.551 SO libspdk_nvmf.so.20.0 00:06:00.811 SYMLINK libspdk_nvmf.so 00:06:01.069 CC module/env_dpdk/env_dpdk_rpc.o 00:06:01.327 CC module/fsdev/aio/fsdev_aio.o 00:06:01.327 CC module/accel/dsa/accel_dsa.o 00:06:01.327 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:01.327 CC module/accel/ioat/accel_ioat.o 00:06:01.327 CC module/keyring/file/keyring.o 00:06:01.327 CC module/accel/error/accel_error.o 00:06:01.327 CC module/sock/posix/posix.o 00:06:01.327 CC module/accel/iaa/accel_iaa.o 00:06:01.327 CC module/blob/bdev/blob_bdev.o 00:06:01.327 LIB libspdk_env_dpdk_rpc.a 00:06:01.327 SO libspdk_env_dpdk_rpc.so.6.0 00:06:01.327 SYMLINK libspdk_env_dpdk_rpc.so 00:06:01.327 CC module/accel/iaa/accel_iaa_rpc.o 00:06:01.327 CC module/keyring/file/keyring_rpc.o 00:06:01.586 LIB libspdk_scheduler_dynamic.a 00:06:01.586 CC module/accel/ioat/accel_ioat_rpc.o 00:06:01.586 CC module/accel/error/accel_error_rpc.o 00:06:01.586 SO libspdk_scheduler_dynamic.so.4.0 00:06:01.586 LIB libspdk_accel_iaa.a 00:06:01.586 SO libspdk_accel_iaa.so.3.0 00:06:01.586 SYMLINK libspdk_scheduler_dynamic.so 00:06:01.586 CC module/accel/dsa/accel_dsa_rpc.o 00:06:01.586 LIB libspdk_keyring_file.a 00:06:01.586 SO libspdk_keyring_file.so.2.0 00:06:01.586 LIB libspdk_blob_bdev.a 00:06:01.586 LIB libspdk_accel_ioat.a 00:06:01.586 LIB libspdk_accel_error.a 00:06:01.586 SYMLINK libspdk_accel_iaa.so 00:06:01.586 SO 
libspdk_blob_bdev.so.12.0 00:06:01.586 SO libspdk_accel_ioat.so.6.0 00:06:01.586 SO libspdk_accel_error.so.2.0 00:06:01.586 CC module/keyring/linux/keyring.o 00:06:01.586 SYMLINK libspdk_keyring_file.so 00:06:01.586 CC module/keyring/linux/keyring_rpc.o 00:06:01.844 SYMLINK libspdk_blob_bdev.so 00:06:01.844 SYMLINK libspdk_accel_ioat.so 00:06:01.844 LIB libspdk_accel_dsa.a 00:06:01.844 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:01.844 SYMLINK libspdk_accel_error.so 00:06:01.844 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:01.844 SO libspdk_accel_dsa.so.5.0 00:06:01.844 CC module/scheduler/gscheduler/gscheduler.o 00:06:01.844 SYMLINK libspdk_accel_dsa.so 00:06:01.844 CC module/fsdev/aio/linux_aio_mgr.o 00:06:01.844 LIB libspdk_keyring_linux.a 00:06:01.844 SO libspdk_keyring_linux.so.1.0 00:06:01.844 LIB libspdk_scheduler_dpdk_governor.a 00:06:01.844 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:01.844 SYMLINK libspdk_keyring_linux.so 00:06:02.102 LIB libspdk_scheduler_gscheduler.a 00:06:02.102 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:02.102 CC module/bdev/delay/vbdev_delay.o 00:06:02.102 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:02.102 CC module/blobfs/bdev/blobfs_bdev.o 00:06:02.102 SO libspdk_scheduler_gscheduler.so.4.0 00:06:02.102 CC module/bdev/error/vbdev_error.o 00:06:02.102 LIB libspdk_fsdev_aio.a 00:06:02.102 SYMLINK libspdk_scheduler_gscheduler.so 00:06:02.102 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:02.102 SO libspdk_fsdev_aio.so.1.0 00:06:02.102 CC module/bdev/gpt/gpt.o 00:06:02.102 LIB libspdk_sock_posix.a 00:06:02.102 CC module/bdev/error/vbdev_error_rpc.o 00:06:02.102 SO libspdk_sock_posix.so.6.0 00:06:02.102 CC module/bdev/lvol/vbdev_lvol.o 00:06:02.102 SYMLINK libspdk_fsdev_aio.so 00:06:02.102 CC module/bdev/gpt/vbdev_gpt.o 00:06:02.359 CC module/bdev/malloc/bdev_malloc.o 00:06:02.359 LIB libspdk_blobfs_bdev.a 00:06:02.359 SYMLINK libspdk_sock_posix.so 00:06:02.359 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:02.359 SO 
libspdk_blobfs_bdev.so.6.0 00:06:02.359 CC module/bdev/null/bdev_null.o 00:06:02.359 CC module/bdev/null/bdev_null_rpc.o 00:06:02.359 SYMLINK libspdk_blobfs_bdev.so 00:06:02.359 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:02.359 LIB libspdk_bdev_error.a 00:06:02.359 LIB libspdk_bdev_delay.a 00:06:02.359 SO libspdk_bdev_error.so.6.0 00:06:02.359 SO libspdk_bdev_delay.so.6.0 00:06:02.617 SYMLINK libspdk_bdev_error.so 00:06:02.617 LIB libspdk_bdev_gpt.a 00:06:02.617 SYMLINK libspdk_bdev_delay.so 00:06:02.617 CC module/bdev/nvme/bdev_nvme.o 00:06:02.617 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:02.617 SO libspdk_bdev_gpt.so.6.0 00:06:02.617 CC module/bdev/nvme/nvme_rpc.o 00:06:02.617 SYMLINK libspdk_bdev_gpt.so 00:06:02.617 CC module/bdev/passthru/vbdev_passthru.o 00:06:02.617 CC module/bdev/raid/bdev_raid.o 00:06:02.617 LIB libspdk_bdev_null.a 00:06:02.617 LIB libspdk_bdev_malloc.a 00:06:02.617 SO libspdk_bdev_null.so.6.0 00:06:02.874 SO libspdk_bdev_malloc.so.6.0 00:06:02.874 SYMLINK libspdk_bdev_null.so 00:06:02.874 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:02.874 CC module/bdev/raid/bdev_raid_rpc.o 00:06:02.874 CC module/bdev/split/vbdev_split.o 00:06:02.874 SYMLINK libspdk_bdev_malloc.so 00:06:02.874 CC module/bdev/raid/bdev_raid_sb.o 00:06:02.874 LIB libspdk_bdev_lvol.a 00:06:02.874 SO libspdk_bdev_lvol.so.6.0 00:06:02.874 SYMLINK libspdk_bdev_lvol.so 00:06:02.874 CC module/bdev/raid/raid0.o 00:06:02.874 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:02.874 CC module/bdev/raid/raid1.o 00:06:02.874 LIB libspdk_bdev_passthru.a 00:06:03.132 SO libspdk_bdev_passthru.so.6.0 00:06:03.132 CC module/bdev/split/vbdev_split_rpc.o 00:06:03.132 CC module/bdev/raid/concat.o 00:06:03.132 SYMLINK libspdk_bdev_passthru.so 00:06:03.132 CC module/bdev/nvme/bdev_mdns_client.o 00:06:03.132 CC module/bdev/raid/raid5f.o 00:06:03.132 CC module/bdev/nvme/vbdev_opal.o 00:06:03.132 LIB libspdk_bdev_split.a 00:06:03.132 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:03.132 SO 
libspdk_bdev_split.so.6.0 00:06:03.391 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:03.391 SYMLINK libspdk_bdev_split.so 00:06:03.391 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:03.649 CC module/bdev/aio/bdev_aio.o 00:06:03.649 CC module/bdev/aio/bdev_aio_rpc.o 00:06:03.649 LIB libspdk_bdev_zone_block.a 00:06:03.649 CC module/bdev/ftl/bdev_ftl.o 00:06:03.649 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:03.649 CC module/bdev/iscsi/bdev_iscsi.o 00:06:03.649 SO libspdk_bdev_zone_block.so.6.0 00:06:03.649 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:03.649 SYMLINK libspdk_bdev_zone_block.so 00:06:03.649 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:03.649 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:03.649 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:03.906 LIB libspdk_bdev_ftl.a 00:06:03.906 LIB libspdk_bdev_raid.a 00:06:03.906 LIB libspdk_bdev_aio.a 00:06:03.906 SO libspdk_bdev_ftl.so.6.0 00:06:03.906 SO libspdk_bdev_raid.so.6.0 00:06:03.906 SO libspdk_bdev_aio.so.6.0 00:06:03.906 SYMLINK libspdk_bdev_ftl.so 00:06:03.906 LIB libspdk_bdev_iscsi.a 00:06:04.162 SYMLINK libspdk_bdev_aio.so 00:06:04.162 SYMLINK libspdk_bdev_raid.so 00:06:04.162 SO libspdk_bdev_iscsi.so.6.0 00:06:04.162 SYMLINK libspdk_bdev_iscsi.so 00:06:04.162 LIB libspdk_bdev_virtio.a 00:06:04.419 SO libspdk_bdev_virtio.so.6.0 00:06:04.419 SYMLINK libspdk_bdev_virtio.so 00:06:05.791 LIB libspdk_bdev_nvme.a 00:06:05.791 SO libspdk_bdev_nvme.so.7.1 00:06:05.791 SYMLINK libspdk_bdev_nvme.so 00:06:06.730 CC module/event/subsystems/keyring/keyring.o 00:06:06.730 CC module/event/subsystems/vmd/vmd.o 00:06:06.730 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:06.730 CC module/event/subsystems/scheduler/scheduler.o 00:06:06.730 CC module/event/subsystems/iobuf/iobuf.o 00:06:06.730 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:06.730 CC module/event/subsystems/sock/sock.o 00:06:06.730 CC module/event/subsystems/fsdev/fsdev.o 00:06:06.730 CC module/event/subsystems/vhost_blk/vhost_blk.o 
00:06:06.730 LIB libspdk_event_scheduler.a 00:06:06.730 LIB libspdk_event_fsdev.a 00:06:06.730 LIB libspdk_event_keyring.a 00:06:06.730 LIB libspdk_event_sock.a 00:06:06.730 LIB libspdk_event_vhost_blk.a 00:06:06.730 SO libspdk_event_scheduler.so.4.0 00:06:06.730 SO libspdk_event_fsdev.so.1.0 00:06:06.730 SO libspdk_event_keyring.so.1.0 00:06:06.730 SO libspdk_event_sock.so.5.0 00:06:06.730 SO libspdk_event_vhost_blk.so.3.0 00:06:06.730 LIB libspdk_event_vmd.a 00:06:06.730 SYMLINK libspdk_event_scheduler.so 00:06:06.730 SYMLINK libspdk_event_fsdev.so 00:06:06.730 SYMLINK libspdk_event_keyring.so 00:06:06.730 LIB libspdk_event_iobuf.a 00:06:06.730 SO libspdk_event_vmd.so.6.0 00:06:06.730 SYMLINK libspdk_event_vhost_blk.so 00:06:06.730 SYMLINK libspdk_event_sock.so 00:06:06.730 SO libspdk_event_iobuf.so.3.0 00:06:06.730 SYMLINK libspdk_event_vmd.so 00:06:06.990 SYMLINK libspdk_event_iobuf.so 00:06:07.250 CC module/event/subsystems/accel/accel.o 00:06:07.510 LIB libspdk_event_accel.a 00:06:07.510 SO libspdk_event_accel.so.6.0 00:06:07.510 SYMLINK libspdk_event_accel.so 00:06:08.078 CC module/event/subsystems/bdev/bdev.o 00:06:08.078 LIB libspdk_event_bdev.a 00:06:08.337 SO libspdk_event_bdev.so.6.0 00:06:08.337 SYMLINK libspdk_event_bdev.so 00:06:08.596 CC module/event/subsystems/scsi/scsi.o 00:06:08.596 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:08.596 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:08.596 CC module/event/subsystems/nbd/nbd.o 00:06:08.596 CC module/event/subsystems/ublk/ublk.o 00:06:08.856 LIB libspdk_event_ublk.a 00:06:08.856 LIB libspdk_event_nbd.a 00:06:08.856 SO libspdk_event_ublk.so.3.0 00:06:08.856 SO libspdk_event_nbd.so.6.0 00:06:08.856 LIB libspdk_event_scsi.a 00:06:08.856 SYMLINK libspdk_event_ublk.so 00:06:08.856 SO libspdk_event_scsi.so.6.0 00:06:08.856 SYMLINK libspdk_event_nbd.so 00:06:08.856 LIB libspdk_event_nvmf.a 00:06:08.856 SYMLINK libspdk_event_scsi.so 00:06:08.856 SO libspdk_event_nvmf.so.6.0 00:06:09.115 SYMLINK 
libspdk_event_nvmf.so 00:06:09.399 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:09.399 CC module/event/subsystems/iscsi/iscsi.o 00:06:09.399 LIB libspdk_event_vhost_scsi.a 00:06:09.399 LIB libspdk_event_iscsi.a 00:06:09.399 SO libspdk_event_vhost_scsi.so.3.0 00:06:09.674 SO libspdk_event_iscsi.so.6.0 00:06:09.674 SYMLINK libspdk_event_iscsi.so 00:06:09.674 SYMLINK libspdk_event_vhost_scsi.so 00:06:09.674 SO libspdk.so.6.0 00:06:09.674 SYMLINK libspdk.so 00:06:10.246 CC app/trace_record/trace_record.o 00:06:10.246 CC app/spdk_nvme_identify/identify.o 00:06:10.246 CC app/spdk_nvme_perf/perf.o 00:06:10.246 CXX app/trace/trace.o 00:06:10.246 CC app/spdk_lspci/spdk_lspci.o 00:06:10.246 CC app/nvmf_tgt/nvmf_main.o 00:06:10.246 CC app/iscsi_tgt/iscsi_tgt.o 00:06:10.246 CC app/spdk_tgt/spdk_tgt.o 00:06:10.246 CC examples/util/zipf/zipf.o 00:06:10.246 CC test/thread/poller_perf/poller_perf.o 00:06:10.246 LINK spdk_lspci 00:06:10.246 LINK nvmf_tgt 00:06:10.246 LINK zipf 00:06:10.505 LINK iscsi_tgt 00:06:10.505 LINK spdk_tgt 00:06:10.505 LINK spdk_trace_record 00:06:10.505 LINK poller_perf 00:06:10.505 LINK spdk_trace 00:06:10.764 CC app/spdk_nvme_discover/discovery_aer.o 00:06:10.764 CC app/spdk_top/spdk_top.o 00:06:10.764 TEST_HEADER include/spdk/accel.h 00:06:10.764 TEST_HEADER include/spdk/accel_module.h 00:06:10.764 TEST_HEADER include/spdk/assert.h 00:06:10.764 TEST_HEADER include/spdk/barrier.h 00:06:10.764 TEST_HEADER include/spdk/base64.h 00:06:10.764 TEST_HEADER include/spdk/bdev.h 00:06:10.764 TEST_HEADER include/spdk/bdev_module.h 00:06:10.764 TEST_HEADER include/spdk/bdev_zone.h 00:06:10.764 TEST_HEADER include/spdk/bit_array.h 00:06:10.764 TEST_HEADER include/spdk/bit_pool.h 00:06:10.764 TEST_HEADER include/spdk/blob_bdev.h 00:06:10.764 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:10.764 TEST_HEADER include/spdk/blobfs.h 00:06:10.764 TEST_HEADER include/spdk/blob.h 00:06:10.764 TEST_HEADER include/spdk/conf.h 00:06:10.764 TEST_HEADER 
include/spdk/config.h 00:06:10.764 TEST_HEADER include/spdk/cpuset.h 00:06:10.764 TEST_HEADER include/spdk/crc16.h 00:06:10.764 TEST_HEADER include/spdk/crc32.h 00:06:10.764 TEST_HEADER include/spdk/crc64.h 00:06:10.764 TEST_HEADER include/spdk/dif.h 00:06:10.764 TEST_HEADER include/spdk/dma.h 00:06:10.764 TEST_HEADER include/spdk/endian.h 00:06:10.764 TEST_HEADER include/spdk/env_dpdk.h 00:06:10.764 TEST_HEADER include/spdk/env.h 00:06:10.764 TEST_HEADER include/spdk/event.h 00:06:10.764 TEST_HEADER include/spdk/fd_group.h 00:06:10.764 TEST_HEADER include/spdk/fd.h 00:06:10.764 TEST_HEADER include/spdk/file.h 00:06:10.764 CC test/dma/test_dma/test_dma.o 00:06:10.764 TEST_HEADER include/spdk/fsdev.h 00:06:10.764 TEST_HEADER include/spdk/fsdev_module.h 00:06:10.764 TEST_HEADER include/spdk/ftl.h 00:06:10.764 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:10.764 CC app/spdk_dd/spdk_dd.o 00:06:10.764 TEST_HEADER include/spdk/gpt_spec.h 00:06:10.764 TEST_HEADER include/spdk/hexlify.h 00:06:10.764 TEST_HEADER include/spdk/histogram_data.h 00:06:10.764 TEST_HEADER include/spdk/idxd.h 00:06:10.764 TEST_HEADER include/spdk/idxd_spec.h 00:06:10.764 CC examples/ioat/perf/perf.o 00:06:10.764 TEST_HEADER include/spdk/init.h 00:06:10.764 TEST_HEADER include/spdk/ioat.h 00:06:10.764 TEST_HEADER include/spdk/ioat_spec.h 00:06:10.764 TEST_HEADER include/spdk/iscsi_spec.h 00:06:10.764 TEST_HEADER include/spdk/json.h 00:06:10.764 TEST_HEADER include/spdk/jsonrpc.h 00:06:10.764 TEST_HEADER include/spdk/keyring.h 00:06:11.023 TEST_HEADER include/spdk/keyring_module.h 00:06:11.023 TEST_HEADER include/spdk/likely.h 00:06:11.023 TEST_HEADER include/spdk/log.h 00:06:11.023 TEST_HEADER include/spdk/lvol.h 00:06:11.023 CC test/app/bdev_svc/bdev_svc.o 00:06:11.023 TEST_HEADER include/spdk/md5.h 00:06:11.023 TEST_HEADER include/spdk/memory.h 00:06:11.023 TEST_HEADER include/spdk/mmio.h 00:06:11.023 TEST_HEADER include/spdk/nbd.h 00:06:11.023 TEST_HEADER include/spdk/net.h 00:06:11.023 
TEST_HEADER include/spdk/notify.h 00:06:11.023 TEST_HEADER include/spdk/nvme.h 00:06:11.023 TEST_HEADER include/spdk/nvme_intel.h 00:06:11.023 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:11.023 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:11.023 TEST_HEADER include/spdk/nvme_spec.h 00:06:11.023 TEST_HEADER include/spdk/nvme_zns.h 00:06:11.023 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:11.023 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:11.023 TEST_HEADER include/spdk/nvmf.h 00:06:11.023 TEST_HEADER include/spdk/nvmf_spec.h 00:06:11.023 TEST_HEADER include/spdk/nvmf_transport.h 00:06:11.023 TEST_HEADER include/spdk/opal.h 00:06:11.023 TEST_HEADER include/spdk/opal_spec.h 00:06:11.023 TEST_HEADER include/spdk/pci_ids.h 00:06:11.023 TEST_HEADER include/spdk/pipe.h 00:06:11.023 TEST_HEADER include/spdk/queue.h 00:06:11.023 TEST_HEADER include/spdk/reduce.h 00:06:11.023 TEST_HEADER include/spdk/rpc.h 00:06:11.023 TEST_HEADER include/spdk/scheduler.h 00:06:11.023 TEST_HEADER include/spdk/scsi.h 00:06:11.023 TEST_HEADER include/spdk/scsi_spec.h 00:06:11.023 CC app/fio/nvme/fio_plugin.o 00:06:11.023 TEST_HEADER include/spdk/sock.h 00:06:11.023 TEST_HEADER include/spdk/stdinc.h 00:06:11.023 TEST_HEADER include/spdk/string.h 00:06:11.023 TEST_HEADER include/spdk/thread.h 00:06:11.023 TEST_HEADER include/spdk/trace.h 00:06:11.023 TEST_HEADER include/spdk/trace_parser.h 00:06:11.023 TEST_HEADER include/spdk/tree.h 00:06:11.023 TEST_HEADER include/spdk/ublk.h 00:06:11.023 TEST_HEADER include/spdk/util.h 00:06:11.023 TEST_HEADER include/spdk/uuid.h 00:06:11.023 TEST_HEADER include/spdk/version.h 00:06:11.023 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:11.023 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:11.023 TEST_HEADER include/spdk/vhost.h 00:06:11.023 TEST_HEADER include/spdk/vmd.h 00:06:11.023 TEST_HEADER include/spdk/xor.h 00:06:11.023 TEST_HEADER include/spdk/zipf.h 00:06:11.023 CXX test/cpp_headers/accel.o 00:06:11.023 LINK spdk_nvme_discover 00:06:11.023 LINK 
bdev_svc 00:06:11.023 LINK ioat_perf 00:06:11.023 LINK spdk_nvme_identify 00:06:11.023 LINK spdk_nvme_perf 00:06:11.282 CXX test/cpp_headers/accel_module.o 00:06:11.282 LINK spdk_dd 00:06:11.282 CC app/fio/bdev/fio_plugin.o 00:06:11.282 CC examples/ioat/verify/verify.o 00:06:11.282 CXX test/cpp_headers/assert.o 00:06:11.540 LINK test_dma 00:06:11.540 CC app/vhost/vhost.o 00:06:11.540 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:11.540 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:11.540 CXX test/cpp_headers/barrier.o 00:06:11.540 LINK verify 00:06:11.540 LINK vhost 00:06:11.540 LINK spdk_nvme 00:06:11.540 LINK interrupt_tgt 00:06:11.800 CC examples/thread/thread/thread_ex.o 00:06:11.800 CXX test/cpp_headers/base64.o 00:06:11.800 LINK spdk_top 00:06:11.800 CC examples/sock/hello_world/hello_sock.o 00:06:11.800 CXX test/cpp_headers/bdev.o 00:06:11.800 CC examples/vmd/lsvmd/lsvmd.o 00:06:11.800 CC examples/idxd/perf/perf.o 00:06:12.059 LINK nvme_fuzz 00:06:12.059 LINK spdk_bdev 00:06:12.059 LINK thread 00:06:12.059 CC test/env/vtophys/vtophys.o 00:06:12.059 LINK lsvmd 00:06:12.059 CC test/event/event_perf/event_perf.o 00:06:12.059 CC test/env/mem_callbacks/mem_callbacks.o 00:06:12.059 CXX test/cpp_headers/bdev_module.o 00:06:12.059 LINK hello_sock 00:06:12.318 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:12.318 LINK vtophys 00:06:12.318 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:12.318 LINK event_perf 00:06:12.318 CC test/env/memory/memory_ut.o 00:06:12.318 LINK idxd_perf 00:06:12.318 CXX test/cpp_headers/bdev_zone.o 00:06:12.318 CC examples/vmd/led/led.o 00:06:12.318 LINK env_dpdk_post_init 00:06:12.577 CC test/env/pci/pci_ut.o 00:06:12.577 LINK led 00:06:12.577 CC test/event/reactor/reactor.o 00:06:12.577 CXX test/cpp_headers/bit_array.o 00:06:12.577 CC test/app/histogram_perf/histogram_perf.o 00:06:12.577 CC examples/accel/perf/accel_perf.o 00:06:12.577 CXX test/cpp_headers/bit_pool.o 00:06:12.577 LINK reactor 00:06:12.577 CXX 
test/cpp_headers/blob_bdev.o 00:06:12.577 LINK histogram_perf 00:06:12.836 CC test/app/jsoncat/jsoncat.o 00:06:12.836 LINK mem_callbacks 00:06:12.836 CXX test/cpp_headers/blobfs_bdev.o 00:06:12.836 CC test/app/stub/stub.o 00:06:12.836 CC test/event/reactor_perf/reactor_perf.o 00:06:12.836 LINK pci_ut 00:06:12.836 LINK jsoncat 00:06:13.094 CC examples/blob/hello_world/hello_blob.o 00:06:13.095 CC examples/blob/cli/blobcli.o 00:06:13.095 LINK reactor_perf 00:06:13.095 CXX test/cpp_headers/blobfs.o 00:06:13.095 LINK stub 00:06:13.095 LINK accel_perf 00:06:13.095 CXX test/cpp_headers/blob.o 00:06:13.353 LINK hello_blob 00:06:13.353 CXX test/cpp_headers/conf.o 00:06:13.353 CC test/event/app_repeat/app_repeat.o 00:06:13.353 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:13.353 CC examples/nvme/hello_world/hello_world.o 00:06:13.353 CXX test/cpp_headers/config.o 00:06:13.353 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:13.353 LINK app_repeat 00:06:13.353 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:13.612 CXX test/cpp_headers/cpuset.o 00:06:13.612 CC examples/bdev/hello_world/hello_bdev.o 00:06:13.612 LINK memory_ut 00:06:13.612 CC examples/nvme/reconnect/reconnect.o 00:06:13.612 CXX test/cpp_headers/crc16.o 00:06:13.612 LINK hello_world 00:06:13.612 LINK blobcli 00:06:13.871 LINK hello_fsdev 00:06:13.871 CC test/event/scheduler/scheduler.o 00:06:13.871 CXX test/cpp_headers/crc32.o 00:06:13.871 LINK hello_bdev 00:06:13.871 CXX test/cpp_headers/crc64.o 00:06:13.871 CXX test/cpp_headers/dif.o 00:06:13.871 LINK vhost_fuzz 00:06:13.871 CXX test/cpp_headers/dma.o 00:06:14.130 LINK reconnect 00:06:14.130 CXX test/cpp_headers/endian.o 00:06:14.130 LINK scheduler 00:06:14.130 CC test/rpc_client/rpc_client_test.o 00:06:14.130 CC test/nvme/aer/aer.o 00:06:14.130 CC examples/bdev/bdevperf/bdevperf.o 00:06:14.130 CXX test/cpp_headers/env_dpdk.o 00:06:14.130 CC test/accel/dif/dif.o 00:06:14.130 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:14.130 LINK rpc_client_test 
00:06:14.130 CC examples/nvme/arbitration/arbitration.o 00:06:14.389 LINK iscsi_fuzz 00:06:14.389 CXX test/cpp_headers/env.o 00:06:14.389 CC test/blobfs/mkfs/mkfs.o 00:06:14.389 LINK aer 00:06:14.389 CC test/lvol/esnap/esnap.o 00:06:14.389 CXX test/cpp_headers/event.o 00:06:14.389 CC test/nvme/reset/reset.o 00:06:14.648 LINK mkfs 00:06:14.648 LINK arbitration 00:06:14.648 CC test/nvme/sgl/sgl.o 00:06:14.648 CXX test/cpp_headers/fd_group.o 00:06:14.648 CC test/nvme/e2edp/nvme_dp.o 00:06:14.648 LINK reset 00:06:14.906 CXX test/cpp_headers/fd.o 00:06:14.906 CC test/nvme/overhead/overhead.o 00:06:14.906 LINK nvme_manage 00:06:14.906 CC examples/nvme/hotplug/hotplug.o 00:06:14.906 LINK sgl 00:06:14.906 CXX test/cpp_headers/file.o 00:06:14.906 LINK dif 00:06:15.164 CC test/nvme/err_injection/err_injection.o 00:06:15.164 LINK bdevperf 00:06:15.164 LINK nvme_dp 00:06:15.164 CXX test/cpp_headers/fsdev.o 00:06:15.165 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:15.165 LINK hotplug 00:06:15.165 LINK overhead 00:06:15.165 CXX test/cpp_headers/fsdev_module.o 00:06:15.165 CC test/nvme/startup/startup.o 00:06:15.165 CXX test/cpp_headers/ftl.o 00:06:15.165 LINK err_injection 00:06:15.423 LINK cmb_copy 00:06:15.423 CC test/nvme/reserve/reserve.o 00:06:15.423 LINK startup 00:06:15.423 CXX test/cpp_headers/fuse_dispatcher.o 00:06:15.423 CC examples/nvme/abort/abort.o 00:06:15.423 CC test/nvme/simple_copy/simple_copy.o 00:06:15.423 CC test/nvme/connect_stress/connect_stress.o 00:06:15.423 CC test/bdev/bdevio/bdevio.o 00:06:15.423 CC test/nvme/boot_partition/boot_partition.o 00:06:15.682 LINK reserve 00:06:15.682 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:15.682 CXX test/cpp_headers/gpt_spec.o 00:06:15.682 LINK connect_stress 00:06:15.682 CC test/nvme/compliance/nvme_compliance.o 00:06:15.682 LINK simple_copy 00:06:15.682 LINK boot_partition 00:06:15.939 CXX test/cpp_headers/hexlify.o 00:06:15.939 LINK pmr_persistence 00:06:15.939 CXX test/cpp_headers/histogram_data.o 
00:06:15.939 LINK abort 00:06:15.939 LINK bdevio 00:06:15.939 CXX test/cpp_headers/idxd.o 00:06:15.939 CC test/nvme/fused_ordering/fused_ordering.o 00:06:15.939 CXX test/cpp_headers/idxd_spec.o 00:06:15.939 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:15.939 CC test/nvme/fdp/fdp.o 00:06:16.198 CC test/nvme/cuse/cuse.o 00:06:16.198 LINK nvme_compliance 00:06:16.198 CXX test/cpp_headers/init.o 00:06:16.198 CXX test/cpp_headers/ioat.o 00:06:16.198 CXX test/cpp_headers/ioat_spec.o 00:06:16.198 LINK fused_ordering 00:06:16.198 LINK doorbell_aers 00:06:16.198 CC examples/nvmf/nvmf/nvmf.o 00:06:16.456 CXX test/cpp_headers/iscsi_spec.o 00:06:16.456 CXX test/cpp_headers/json.o 00:06:16.456 CXX test/cpp_headers/jsonrpc.o 00:06:16.456 CXX test/cpp_headers/keyring.o 00:06:16.456 CXX test/cpp_headers/keyring_module.o 00:06:16.456 CXX test/cpp_headers/likely.o 00:06:16.456 LINK fdp 00:06:16.456 CXX test/cpp_headers/log.o 00:06:16.456 CXX test/cpp_headers/lvol.o 00:06:16.456 CXX test/cpp_headers/md5.o 00:06:16.456 CXX test/cpp_headers/memory.o 00:06:16.715 CXX test/cpp_headers/mmio.o 00:06:16.715 CXX test/cpp_headers/nbd.o 00:06:16.715 CXX test/cpp_headers/net.o 00:06:16.715 LINK nvmf 00:06:16.715 CXX test/cpp_headers/notify.o 00:06:16.715 CXX test/cpp_headers/nvme.o 00:06:16.715 CXX test/cpp_headers/nvme_intel.o 00:06:16.715 CXX test/cpp_headers/nvme_ocssd.o 00:06:16.715 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:16.715 CXX test/cpp_headers/nvme_spec.o 00:06:16.715 CXX test/cpp_headers/nvme_zns.o 00:06:16.715 CXX test/cpp_headers/nvmf_cmd.o 00:06:16.974 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:16.974 CXX test/cpp_headers/nvmf.o 00:06:16.974 CXX test/cpp_headers/nvmf_spec.o 00:06:16.974 CXX test/cpp_headers/nvmf_transport.o 00:06:16.974 CXX test/cpp_headers/opal.o 00:06:16.974 CXX test/cpp_headers/opal_spec.o 00:06:16.974 CXX test/cpp_headers/pci_ids.o 00:06:16.974 CXX test/cpp_headers/pipe.o 00:06:16.974 CXX test/cpp_headers/queue.o 00:06:16.974 CXX 
test/cpp_headers/reduce.o 00:06:16.974 CXX test/cpp_headers/rpc.o 00:06:16.974 CXX test/cpp_headers/scheduler.o 00:06:17.233 CXX test/cpp_headers/scsi.o 00:06:17.233 CXX test/cpp_headers/scsi_spec.o 00:06:17.233 CXX test/cpp_headers/sock.o 00:06:17.233 CXX test/cpp_headers/stdinc.o 00:06:17.233 CXX test/cpp_headers/string.o 00:06:17.233 CXX test/cpp_headers/thread.o 00:06:17.233 CXX test/cpp_headers/trace.o 00:06:17.233 CXX test/cpp_headers/trace_parser.o 00:06:17.233 CXX test/cpp_headers/tree.o 00:06:17.233 CXX test/cpp_headers/ublk.o 00:06:17.233 CXX test/cpp_headers/util.o 00:06:17.492 CXX test/cpp_headers/uuid.o 00:06:17.492 CXX test/cpp_headers/version.o 00:06:17.492 CXX test/cpp_headers/vfio_user_pci.o 00:06:17.492 CXX test/cpp_headers/vfio_user_spec.o 00:06:17.492 CXX test/cpp_headers/vhost.o 00:06:17.492 CXX test/cpp_headers/vmd.o 00:06:17.492 CXX test/cpp_headers/xor.o 00:06:17.492 CXX test/cpp_headers/zipf.o 00:06:17.492 LINK cuse 00:06:20.782 LINK esnap 00:06:21.042 00:06:21.042 real 1m34.863s 00:06:21.042 user 8m29.983s 00:06:21.042 sys 1m45.293s 00:06:21.042 20:19:14 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:21.042 20:19:14 make -- common/autotest_common.sh@10 -- $ set +x 00:06:21.042 ************************************ 00:06:21.042 END TEST make 00:06:21.042 ************************************ 00:06:21.042 20:19:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:21.042 20:19:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:21.042 20:19:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:21.042 20:19:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.042 20:19:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:21.042 20:19:14 -- pm/common@44 -- $ pid=5475 00:06:21.042 20:19:14 -- pm/common@50 -- $ kill -TERM 5475 00:06:21.042 20:19:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.042 20:19:14 -- 
pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:21.042 20:19:14 -- pm/common@44 -- $ pid=5477 00:06:21.042 20:19:14 -- pm/common@50 -- $ kill -TERM 5477 00:06:21.042 20:19:14 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:21.042 20:19:14 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:21.302 20:19:14 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.302 20:19:14 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.302 20:19:14 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.302 20:19:14 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.302 20:19:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.302 20:19:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.302 20:19:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.302 20:19:14 -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.302 20:19:14 -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.302 20:19:14 -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.302 20:19:14 -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.302 20:19:14 -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.302 20:19:14 -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.302 20:19:14 -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.302 20:19:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.302 20:19:14 -- scripts/common.sh@344 -- # case "$op" in 00:06:21.302 20:19:14 -- scripts/common.sh@345 -- # : 1 00:06:21.302 20:19:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.302 20:19:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.302 20:19:14 -- scripts/common.sh@365 -- # decimal 1 00:06:21.302 20:19:14 -- scripts/common.sh@353 -- # local d=1 00:06:21.302 20:19:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.302 20:19:14 -- scripts/common.sh@355 -- # echo 1 00:06:21.302 20:19:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.302 20:19:14 -- scripts/common.sh@366 -- # decimal 2 00:06:21.302 20:19:14 -- scripts/common.sh@353 -- # local d=2 00:06:21.302 20:19:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.302 20:19:14 -- scripts/common.sh@355 -- # echo 2 00:06:21.302 20:19:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.302 20:19:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.302 20:19:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.302 20:19:14 -- scripts/common.sh@368 -- # return 0 00:06:21.302 20:19:14 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.302 20:19:14 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.302 --rc genhtml_branch_coverage=1 00:06:21.302 --rc genhtml_function_coverage=1 00:06:21.302 --rc genhtml_legend=1 00:06:21.302 --rc geninfo_all_blocks=1 00:06:21.302 --rc geninfo_unexecuted_blocks=1 00:06:21.302 00:06:21.302 ' 00:06:21.302 20:19:14 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.302 --rc genhtml_branch_coverage=1 00:06:21.302 --rc genhtml_function_coverage=1 00:06:21.302 --rc genhtml_legend=1 00:06:21.302 --rc geninfo_all_blocks=1 00:06:21.302 --rc geninfo_unexecuted_blocks=1 00:06:21.302 00:06:21.302 ' 00:06:21.302 20:19:14 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.302 --rc genhtml_branch_coverage=1 00:06:21.302 --rc 
genhtml_function_coverage=1 00:06:21.302 --rc genhtml_legend=1 00:06:21.303 --rc geninfo_all_blocks=1 00:06:21.303 --rc geninfo_unexecuted_blocks=1 00:06:21.303 00:06:21.303 ' 00:06:21.303 20:19:14 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.303 --rc genhtml_branch_coverage=1 00:06:21.303 --rc genhtml_function_coverage=1 00:06:21.303 --rc genhtml_legend=1 00:06:21.303 --rc geninfo_all_blocks=1 00:06:21.303 --rc geninfo_unexecuted_blocks=1 00:06:21.303 00:06:21.303 ' 00:06:21.303 20:19:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.303 20:19:14 -- nvmf/common.sh@7 -- # uname -s 00:06:21.303 20:19:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.303 20:19:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.303 20:19:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.303 20:19:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.303 20:19:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.303 20:19:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.303 20:19:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.303 20:19:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.303 20:19:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.303 20:19:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.303 20:19:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:87890ee8-f77f-4451-b4c6-6875f86d77cd 00:06:21.303 20:19:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=87890ee8-f77f-4451-b4c6-6875f86d77cd 00:06:21.303 20:19:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.303 20:19:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.303 20:19:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.303 20:19:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:21.303 20:19:14 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.303 20:19:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:21.303 20:19:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.303 20:19:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.303 20:19:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.303 20:19:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.303 20:19:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.303 20:19:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.303 20:19:14 -- paths/export.sh@5 -- # export PATH 00:06:21.303 20:19:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.303 20:19:14 -- nvmf/common.sh@51 -- # : 0 00:06:21.303 20:19:14 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:21.303 20:19:14 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:21.303 20:19:14 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:21.303 20:19:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.303 20:19:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.303 20:19:14 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:21.303 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:21.303 20:19:14 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:21.303 20:19:14 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:21.303 20:19:14 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:21.303 20:19:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:21.303 20:19:14 -- spdk/autotest.sh@32 -- # uname -s 00:06:21.303 20:19:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:21.303 20:19:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:21.303 20:19:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:21.303 20:19:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:21.303 20:19:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:21.303 20:19:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:21.563 20:19:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:21.563 20:19:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:21.563 20:19:14 -- spdk/autotest.sh@48 -- # udevadm_pid=54542 00:06:21.563 20:19:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:21.563 20:19:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:21.563 20:19:14 -- pm/common@17 -- # local monitor 00:06:21.563 20:19:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.563 20:19:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.563 20:19:14 -- pm/common@25 -- # sleep 1 00:06:21.563 20:19:14 -- pm/common@21 -- # date +%s 00:06:21.563 20:19:14 -- 
pm/common@21 -- # date +%s 00:06:21.563 20:19:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652354 00:06:21.563 20:19:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652354 00:06:21.563 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652354_collect-cpu-load.pm.log 00:06:21.563 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652354_collect-vmstat.pm.log 00:06:22.501 20:19:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:22.501 20:19:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:22.501 20:19:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.501 20:19:15 -- common/autotest_common.sh@10 -- # set +x 00:06:22.501 20:19:15 -- spdk/autotest.sh@59 -- # create_test_list 00:06:22.501 20:19:15 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:22.501 20:19:15 -- common/autotest_common.sh@10 -- # set +x 00:06:22.501 20:19:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:22.501 20:19:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:22.501 20:19:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:22.501 20:19:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:22.501 20:19:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:22.501 20:19:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:22.501 20:19:16 -- common/autotest_common.sh@1457 -- # uname 00:06:22.501 20:19:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:22.501 20:19:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:22.501 20:19:16 -- common/autotest_common.sh@1477 -- 
# uname 00:06:22.501 20:19:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:22.501 20:19:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:22.501 20:19:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:22.759 lcov: LCOV version 1.15 00:06:22.759 20:19:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:37.629 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:37.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:55.724 20:19:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:55.724 20:19:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.724 20:19:46 -- common/autotest_common.sh@10 -- # set +x 00:06:55.724 20:19:46 -- spdk/autotest.sh@78 -- # rm -f 00:06:55.724 20:19:46 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:55.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.724 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:55.724 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:55.724 20:19:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:55.724 20:19:47 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:55.724 20:19:47 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:55.724 20:19:47 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:55.724 
20:19:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.724 20:19:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:55.724 20:19:47 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:55.724 20:19:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:55.724 20:19:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.724 20:19:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.724 20:19:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:55.724 20:19:47 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:55.724 20:19:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:55.724 20:19:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.724 20:19:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.724 20:19:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:55.724 20:19:47 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:55.724 20:19:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:55.724 20:19:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.724 20:19:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.724 20:19:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:55.724 20:19:47 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:55.724 20:19:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:55.724 20:19:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.724 20:19:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:55.724 20:19:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.724 20:19:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.724 20:19:47 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:06:55.724 20:19:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:55.724 20:19:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:55.724 No valid GPT data, bailing 00:06:55.724 20:19:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:55.724 20:19:47 -- scripts/common.sh@394 -- # pt= 00:06:55.724 20:19:47 -- scripts/common.sh@395 -- # return 1 00:06:55.724 20:19:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:55.724 1+0 records in 00:06:55.724 1+0 records out 00:06:55.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601485 s, 174 MB/s 00:06:55.724 20:19:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.724 20:19:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.724 20:19:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:55.724 20:19:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:55.724 20:19:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:55.724 No valid GPT data, bailing 00:06:55.724 20:19:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:55.724 20:19:47 -- scripts/common.sh@394 -- # pt= 00:06:55.724 20:19:47 -- scripts/common.sh@395 -- # return 1 00:06:55.724 20:19:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:55.724 1+0 records in 00:06:55.724 1+0 records out 00:06:55.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448633 s, 234 MB/s 00:06:55.724 20:19:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.724 20:19:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.724 20:19:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:55.724 20:19:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:55.724 20:19:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:06:55.724 No valid GPT data, bailing 00:06:55.724 20:19:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:55.724 20:19:48 -- scripts/common.sh@394 -- # pt= 00:06:55.724 20:19:48 -- scripts/common.sh@395 -- # return 1 00:06:55.724 20:19:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:55.724 1+0 records in 00:06:55.724 1+0 records out 00:06:55.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638377 s, 164 MB/s 00:06:55.724 20:19:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.724 20:19:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.724 20:19:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:55.724 20:19:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:55.724 20:19:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:55.724 No valid GPT data, bailing 00:06:55.724 20:19:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:55.724 20:19:48 -- scripts/common.sh@394 -- # pt= 00:06:55.724 20:19:48 -- scripts/common.sh@395 -- # return 1 00:06:55.724 20:19:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:55.724 1+0 records in 00:06:55.724 1+0 records out 00:06:55.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447636 s, 234 MB/s 00:06:55.724 20:19:48 -- spdk/autotest.sh@105 -- # sync 00:06:55.724 20:19:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:55.724 20:19:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:55.724 20:19:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:57.630 20:19:50 -- spdk/autotest.sh@111 -- # uname -s 00:06:57.630 20:19:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:57.630 20:19:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:57.630 20:19:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:06:58.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:58.198 Hugepages 00:06:58.198 node hugesize free / total 00:06:58.198 node0 1048576kB 0 / 0 00:06:58.198 node0 2048kB 0 / 0 00:06:58.198 00:06:58.198 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:58.198 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:58.458 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:58.458 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:58.458 20:19:51 -- spdk/autotest.sh@117 -- # uname -s 00:06:58.458 20:19:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:58.458 20:19:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:58.458 20:19:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:59.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.395 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.395 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.395 20:19:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:00.336 20:19:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:00.336 20:19:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:00.336 20:19:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:00.336 20:19:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:00.336 20:19:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:00.336 20:19:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:00.336 20:19:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:00.336 20:19:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:00.336 20:19:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:00.595 20:19:53 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:00.595 20:19:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:00.595 20:19:53 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:00.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.853 Waiting for block devices as requested 00:07:01.110 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.110 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.110 20:19:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:01.110 20:19:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:01.110 20:19:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:01.110 20:19:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:01.110 20:19:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:01.110 20:19:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:01.110 20:19:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:01.110 20:19:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:01.110 20:19:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:01.110 20:19:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:01.110 20:19:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:01.110 20:19:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:01.110 20:19:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:01.110 20:19:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:01.110 20:19:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:01.110 20:19:54 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:07:01.110 20:19:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:01.110 20:19:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:01.110 20:19:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:01.111 20:19:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:01.111 20:19:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:01.111 20:19:54 -- common/autotest_common.sh@1543 -- # continue 00:07:01.111 20:19:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:01.111 20:19:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:01.111 20:19:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:01.111 20:19:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:01.111 20:19:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:01.111 20:19:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:01.111 20:19:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:01.111 20:19:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:01.111 20:19:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:01.111 20:19:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:01.369 20:19:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:01.369 20:19:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:01.369 20:19:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:01.369 20:19:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:01.369 20:19:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:01.369 20:19:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:01.369 20:19:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:07:01.369 20:19:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:01.369 20:19:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:01.369 20:19:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:01.369 20:19:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:01.369 20:19:54 -- common/autotest_common.sh@1543 -- # continue 00:07:01.369 20:19:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:01.369 20:19:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.369 20:19:54 -- common/autotest_common.sh@10 -- # set +x 00:07:01.369 20:19:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:01.369 20:19:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.369 20:19:54 -- common/autotest_common.sh@10 -- # set +x 00:07:01.369 20:19:54 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:02.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.308 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.308 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.308 20:19:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:02.308 20:19:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:02.308 20:19:55 -- common/autotest_common.sh@10 -- # set +x 00:07:02.308 20:19:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:02.308 20:19:55 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:02.308 20:19:55 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:02.308 20:19:55 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:02.308 20:19:55 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:02.308 20:19:55 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:02.308 20:19:55 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:02.308 20:19:55 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:02.308 
20:19:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:02.308 20:19:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:02.308 20:19:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:02.308 20:19:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:02.308 20:19:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:02.567 20:19:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:02.567 20:19:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:02.567 20:19:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:02.567 20:19:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:02.567 20:19:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:02.567 20:19:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:02.568 20:19:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:02.568 20:19:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:02.568 20:19:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:02.568 20:19:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:02.568 20:19:55 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:02.568 20:19:55 -- common/autotest_common.sh@1572 -- # return 0 00:07:02.568 20:19:55 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:02.568 20:19:55 -- common/autotest_common.sh@1580 -- # return 0 00:07:02.568 20:19:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:02.568 20:19:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:02.568 20:19:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:02.568 20:19:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:02.568 20:19:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:02.568 20:19:55 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.568 20:19:55 -- common/autotest_common.sh@10 -- # set +x 00:07:02.568 20:19:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:02.568 20:19:55 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:02.568 20:19:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.568 20:19:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.568 20:19:55 -- common/autotest_common.sh@10 -- # set +x 00:07:02.568 ************************************ 00:07:02.568 START TEST env 00:07:02.568 ************************************ 00:07:02.568 20:19:55 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:02.568 * Looking for test storage... 00:07:02.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:02.568 20:19:56 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.568 20:19:56 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.568 20:19:56 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.864 20:19:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.864 20:19:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.864 20:19:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.864 20:19:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.864 20:19:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.864 20:19:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.864 20:19:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.864 20:19:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.864 20:19:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.864 20:19:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.864 20:19:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.864 20:19:56 env -- 
scripts/common.sh@344 -- # case "$op" in 00:07:02.864 20:19:56 env -- scripts/common.sh@345 -- # : 1 00:07:02.864 20:19:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.864 20:19:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.864 20:19:56 env -- scripts/common.sh@365 -- # decimal 1 00:07:02.864 20:19:56 env -- scripts/common.sh@353 -- # local d=1 00:07:02.864 20:19:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.864 20:19:56 env -- scripts/common.sh@355 -- # echo 1 00:07:02.864 20:19:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.864 20:19:56 env -- scripts/common.sh@366 -- # decimal 2 00:07:02.864 20:19:56 env -- scripts/common.sh@353 -- # local d=2 00:07:02.864 20:19:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.864 20:19:56 env -- scripts/common.sh@355 -- # echo 2 00:07:02.864 20:19:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.864 20:19:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.864 20:19:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.864 20:19:56 env -- scripts/common.sh@368 -- # return 0 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.864 --rc genhtml_branch_coverage=1 00:07:02.864 --rc genhtml_function_coverage=1 00:07:02.864 --rc genhtml_legend=1 00:07:02.864 --rc geninfo_all_blocks=1 00:07:02.864 --rc geninfo_unexecuted_blocks=1 00:07:02.864 00:07:02.864 ' 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.864 --rc genhtml_branch_coverage=1 00:07:02.864 --rc genhtml_function_coverage=1 00:07:02.864 --rc genhtml_legend=1 00:07:02.864 --rc 
geninfo_all_blocks=1 00:07:02.864 --rc geninfo_unexecuted_blocks=1 00:07:02.864 00:07:02.864 ' 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.864 --rc genhtml_branch_coverage=1 00:07:02.864 --rc genhtml_function_coverage=1 00:07:02.864 --rc genhtml_legend=1 00:07:02.864 --rc geninfo_all_blocks=1 00:07:02.864 --rc geninfo_unexecuted_blocks=1 00:07:02.864 00:07:02.864 ' 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.864 --rc genhtml_branch_coverage=1 00:07:02.864 --rc genhtml_function_coverage=1 00:07:02.864 --rc genhtml_legend=1 00:07:02.864 --rc geninfo_all_blocks=1 00:07:02.864 --rc geninfo_unexecuted_blocks=1 00:07:02.864 00:07:02.864 ' 00:07:02.864 20:19:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.864 20:19:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.864 20:19:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:02.864 ************************************ 00:07:02.864 START TEST env_memory 00:07:02.864 ************************************ 00:07:02.864 20:19:56 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:02.864 00:07:02.864 00:07:02.864 CUnit - A unit testing framework for C - Version 2.1-3 00:07:02.864 http://cunit.sourceforge.net/ 00:07:02.864 00:07:02.864 00:07:02.864 Suite: memory 00:07:02.864 Test: alloc and free memory map ...[2024-11-26 20:19:56.232123] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:02.864 passed 00:07:02.864 Test: mem map translation ...[2024-11-26 20:19:56.282231] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:02.864 [2024-11-26 20:19:56.282341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:02.864 [2024-11-26 20:19:56.282432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:02.864 [2024-11-26 20:19:56.282464] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:02.864 passed 00:07:02.864 Test: mem map registration ...[2024-11-26 20:19:56.358649] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:02.864 [2024-11-26 20:19:56.358741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:02.864 passed 00:07:03.125 Test: mem map adjacent registrations ...passed 00:07:03.125 00:07:03.125 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.125 suites 1 1 n/a 0 0 00:07:03.125 tests 4 4 4 0 0 00:07:03.125 asserts 152 152 152 0 n/a 00:07:03.125 00:07:03.125 Elapsed time = 0.277 seconds 00:07:03.125 00:07:03.125 real 0m0.329s 00:07:03.125 user 0m0.289s 00:07:03.125 sys 0m0.029s 00:07:03.125 20:19:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.125 20:19:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:03.125 ************************************ 00:07:03.125 END TEST env_memory 00:07:03.125 ************************************ 00:07:03.125 20:19:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:03.125 
20:19:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.125 20:19:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.125 20:19:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:03.125 ************************************ 00:07:03.125 START TEST env_vtophys 00:07:03.125 ************************************ 00:07:03.125 20:19:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:03.125 EAL: lib.eal log level changed from notice to debug 00:07:03.125 EAL: Detected lcore 0 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 1 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 2 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 3 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 4 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 5 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 6 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 7 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 8 as core 0 on socket 0 00:07:03.125 EAL: Detected lcore 9 as core 0 on socket 0 00:07:03.125 EAL: Maximum logical cores by configuration: 128 00:07:03.125 EAL: Detected CPU lcores: 10 00:07:03.125 EAL: Detected NUMA nodes: 1 00:07:03.125 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:03.125 EAL: Detected shared linkage of DPDK 00:07:03.125 EAL: No shared files mode enabled, IPC will be disabled 00:07:03.125 EAL: Selected IOVA mode 'PA' 00:07:03.125 EAL: Probing VFIO support... 00:07:03.125 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:03.125 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:03.125 EAL: Ask a virtual area of 0x2e000 bytes 00:07:03.125 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:03.125 EAL: Setting up physically contiguous memory... 
00:07:03.125 EAL: Setting maximum number of open files to 524288 00:07:03.125 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:03.125 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:03.125 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.125 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:03.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.125 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.125 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:03.125 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:03.125 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.125 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:03.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.125 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.125 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:03.125 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:03.125 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.125 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:03.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.125 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.125 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:03.125 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:03.125 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.125 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:03.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.125 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.125 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:03.125 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:03.125 EAL: Hugepages will be freed exactly as allocated. 
00:07:03.125 EAL: No shared files mode enabled, IPC is disabled 00:07:03.125 EAL: No shared files mode enabled, IPC is disabled 00:07:03.384 EAL: TSC frequency is ~2290000 KHz 00:07:03.384 EAL: Main lcore 0 is ready (tid=7f16cbaf8a40;cpuset=[0]) 00:07:03.384 EAL: Trying to obtain current memory policy. 00:07:03.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.384 EAL: Restoring previous memory policy: 0 00:07:03.384 EAL: request: mp_malloc_sync 00:07:03.384 EAL: No shared files mode enabled, IPC is disabled 00:07:03.384 EAL: Heap on socket 0 was expanded by 2MB 00:07:03.384 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:03.384 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:03.384 EAL: Mem event callback 'spdk:(nil)' registered 00:07:03.384 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:03.384 00:07:03.384 00:07:03.384 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.384 http://cunit.sourceforge.net/ 00:07:03.384 00:07:03.384 00:07:03.384 Suite: components_suite 00:07:03.642 Test: vtophys_malloc_test ...passed 00:07:03.642 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:03.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.642 EAL: Restoring previous memory policy: 4 00:07:03.642 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.642 EAL: request: mp_malloc_sync 00:07:03.642 EAL: No shared files mode enabled, IPC is disabled 00:07:03.642 EAL: Heap on socket 0 was expanded by 4MB 00:07:03.642 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.642 EAL: request: mp_malloc_sync 00:07:03.642 EAL: No shared files mode enabled, IPC is disabled 00:07:03.642 EAL: Heap on socket 0 was shrunk by 4MB 00:07:03.642 EAL: Trying to obtain current memory policy. 
00:07:03.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.642 EAL: Restoring previous memory policy: 4 00:07:03.642 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.642 EAL: request: mp_malloc_sync 00:07:03.642 EAL: No shared files mode enabled, IPC is disabled 00:07:03.642 EAL: Heap on socket 0 was expanded by 6MB 00:07:03.642 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.642 EAL: request: mp_malloc_sync 00:07:03.642 EAL: No shared files mode enabled, IPC is disabled 00:07:03.642 EAL: Heap on socket 0 was shrunk by 6MB 00:07:03.642 EAL: Trying to obtain current memory policy. 00:07:03.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.901 EAL: Restoring previous memory policy: 4 00:07:03.901 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.901 EAL: request: mp_malloc_sync 00:07:03.901 EAL: No shared files mode enabled, IPC is disabled 00:07:03.901 EAL: Heap on socket 0 was expanded by 10MB 00:07:03.901 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.901 EAL: request: mp_malloc_sync 00:07:03.901 EAL: No shared files mode enabled, IPC is disabled 00:07:03.901 EAL: Heap on socket 0 was shrunk by 10MB 00:07:03.901 EAL: Trying to obtain current memory policy. 00:07:03.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.901 EAL: Restoring previous memory policy: 4 00:07:03.901 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.901 EAL: request: mp_malloc_sync 00:07:03.901 EAL: No shared files mode enabled, IPC is disabled 00:07:03.901 EAL: Heap on socket 0 was expanded by 18MB 00:07:03.901 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.901 EAL: request: mp_malloc_sync 00:07:03.901 EAL: No shared files mode enabled, IPC is disabled 00:07:03.901 EAL: Heap on socket 0 was shrunk by 18MB 00:07:03.901 EAL: Trying to obtain current memory policy. 
00:07:03.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.901 EAL: Restoring previous memory policy: 4 00:07:03.901 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.901 EAL: request: mp_malloc_sync 00:07:03.901 EAL: No shared files mode enabled, IPC is disabled 00:07:03.901 EAL: Heap on socket 0 was expanded by 34MB 00:07:03.901 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.901 EAL: request: mp_malloc_sync 00:07:03.901 EAL: No shared files mode enabled, IPC is disabled 00:07:03.901 EAL: Heap on socket 0 was shrunk by 34MB 00:07:04.160 EAL: Trying to obtain current memory policy. 00:07:04.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.160 EAL: Restoring previous memory policy: 4 00:07:04.160 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.160 EAL: request: mp_malloc_sync 00:07:04.160 EAL: No shared files mode enabled, IPC is disabled 00:07:04.160 EAL: Heap on socket 0 was expanded by 66MB 00:07:04.160 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.160 EAL: request: mp_malloc_sync 00:07:04.160 EAL: No shared files mode enabled, IPC is disabled 00:07:04.160 EAL: Heap on socket 0 was shrunk by 66MB 00:07:04.419 EAL: Trying to obtain current memory policy. 00:07:04.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.419 EAL: Restoring previous memory policy: 4 00:07:04.419 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.419 EAL: request: mp_malloc_sync 00:07:04.419 EAL: No shared files mode enabled, IPC is disabled 00:07:04.419 EAL: Heap on socket 0 was expanded by 130MB 00:07:04.678 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.678 EAL: request: mp_malloc_sync 00:07:04.678 EAL: No shared files mode enabled, IPC is disabled 00:07:04.678 EAL: Heap on socket 0 was shrunk by 130MB 00:07:04.937 EAL: Trying to obtain current memory policy. 
00:07:04.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.937 EAL: Restoring previous memory policy: 4 00:07:04.937 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.937 EAL: request: mp_malloc_sync 00:07:04.937 EAL: No shared files mode enabled, IPC is disabled 00:07:04.937 EAL: Heap on socket 0 was expanded by 258MB 00:07:05.504 EAL: Calling mem event callback 'spdk:(nil)' 00:07:05.504 EAL: request: mp_malloc_sync 00:07:05.504 EAL: No shared files mode enabled, IPC is disabled 00:07:05.504 EAL: Heap on socket 0 was shrunk by 258MB 00:07:06.073 EAL: Trying to obtain current memory policy. 00:07:06.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:06.073 EAL: Restoring previous memory policy: 4 00:07:06.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:06.073 EAL: request: mp_malloc_sync 00:07:06.073 EAL: No shared files mode enabled, IPC is disabled 00:07:06.073 EAL: Heap on socket 0 was expanded by 514MB 00:07:07.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.454 EAL: request: mp_malloc_sync 00:07:07.454 EAL: No shared files mode enabled, IPC is disabled 00:07:07.454 EAL: Heap on socket 0 was shrunk by 514MB 00:07:08.391 EAL: Trying to obtain current memory policy. 
00:07:08.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.650 EAL: Restoring previous memory policy: 4 00:07:08.650 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.650 EAL: request: mp_malloc_sync 00:07:08.650 EAL: No shared files mode enabled, IPC is disabled 00:07:08.650 EAL: Heap on socket 0 was expanded by 1026MB 00:07:10.560 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.820 EAL: request: mp_malloc_sync 00:07:10.820 EAL: No shared files mode enabled, IPC is disabled 00:07:10.820 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:12.730 passed 00:07:12.730 00:07:12.730 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.730 suites 1 1 n/a 0 0 00:07:12.730 tests 2 2 2 0 0 00:07:12.730 asserts 5656 5656 5656 0 n/a 00:07:12.730 00:07:12.730 Elapsed time = 9.214 seconds 00:07:12.730 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.730 EAL: request: mp_malloc_sync 00:07:12.730 EAL: No shared files mode enabled, IPC is disabled 00:07:12.730 EAL: Heap on socket 0 was shrunk by 2MB 00:07:12.730 EAL: No shared files mode enabled, IPC is disabled 00:07:12.730 EAL: No shared files mode enabled, IPC is disabled 00:07:12.730 EAL: No shared files mode enabled, IPC is disabled 00:07:12.730 00:07:12.730 real 0m9.554s 00:07:12.730 user 0m8.515s 00:07:12.730 sys 0m0.873s 00:07:12.730 20:20:06 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.730 20:20:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:12.730 ************************************ 00:07:12.730 END TEST env_vtophys 00:07:12.730 ************************************ 00:07:12.730 20:20:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:12.730 20:20:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.730 20:20:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.730 20:20:06 env -- common/autotest_common.sh@10 -- # set +x 00:07:12.730 
************************************ 00:07:12.730 START TEST env_pci 00:07:12.730 ************************************ 00:07:12.730 20:20:06 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:12.730 00:07:12.730 00:07:12.730 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.730 http://cunit.sourceforge.net/ 00:07:12.730 00:07:12.730 00:07:12.730 Suite: pci 00:07:12.730 Test: pci_hook ...[2024-11-26 20:20:06.208455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56868 has claimed it 00:07:12.730 passed 00:07:12.730 00:07:12.730 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.730 suites 1 1 n/a 0 0 00:07:12.730 tests 1 1 1 0 0 00:07:12.730 asserts 25 25 25 0 n/a 00:07:12.730 00:07:12.730 Elapsed time = 0.010 seconds EAL: Cannot find device (10000:00:01.0) 00:07:12.730 EAL: Failed to attach device on primary process 00:07:12.730 00:07:12.730 00:07:12.730 real 0m0.115s 00:07:12.730 user 0m0.058s 00:07:12.730 sys 0m0.056s 00:07:12.730 20:20:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.730 20:20:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:12.730 ************************************ 00:07:12.730 END TEST env_pci 00:07:12.730 ************************************ 00:07:12.990 20:20:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:12.990 20:20:06 env -- env/env.sh@15 -- # uname 00:07:12.990 20:20:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:12.990 20:20:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:12.990 20:20:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:12.990 20:20:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:12.990 20:20:06 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.990 20:20:06 env -- common/autotest_common.sh@10 -- # set +x 00:07:12.990 ************************************ 00:07:12.990 START TEST env_dpdk_post_init 00:07:12.990 ************************************ 00:07:12.990 20:20:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:12.990 EAL: Detected CPU lcores: 10 00:07:12.990 EAL: Detected NUMA nodes: 1 00:07:12.990 EAL: Detected shared linkage of DPDK 00:07:12.990 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:12.990 EAL: Selected IOVA mode 'PA' 00:07:13.250 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:13.250 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:13.250 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:13.250 Starting DPDK initialization... 00:07:13.250 Starting SPDK post initialization... 00:07:13.250 SPDK NVMe probe 00:07:13.250 Attaching to 0000:00:10.0 00:07:13.250 Attaching to 0000:00:11.0 00:07:13.250 Attached to 0000:00:10.0 00:07:13.250 Attached to 0000:00:11.0 00:07:13.250 Cleaning up... 
00:07:13.250 00:07:13.250 real 0m0.296s 00:07:13.250 user 0m0.103s 00:07:13.250 sys 0m0.094s 00:07:13.250 20:20:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.250 20:20:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:13.250 ************************************ 00:07:13.250 END TEST env_dpdk_post_init 00:07:13.250 ************************************ 00:07:13.250 20:20:06 env -- env/env.sh@26 -- # uname 00:07:13.250 20:20:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:13.250 20:20:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:13.250 20:20:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.250 20:20:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.250 20:20:06 env -- common/autotest_common.sh@10 -- # set +x 00:07:13.250 ************************************ 00:07:13.250 START TEST env_mem_callbacks 00:07:13.250 ************************************ 00:07:13.250 20:20:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:13.250 EAL: Detected CPU lcores: 10 00:07:13.250 EAL: Detected NUMA nodes: 1 00:07:13.250 EAL: Detected shared linkage of DPDK 00:07:13.250 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:13.250 EAL: Selected IOVA mode 'PA' 00:07:13.509 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:13.509 00:07:13.509 00:07:13.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.509 http://cunit.sourceforge.net/ 00:07:13.509 00:07:13.509 00:07:13.509 Suite: memory 00:07:13.509 Test: test ... 
00:07:13.509 register 0x200000200000 2097152 00:07:13.509 malloc 3145728 00:07:13.509 register 0x200000400000 4194304 00:07:13.509 buf 0x2000004fffc0 len 3145728 PASSED 00:07:13.509 malloc 64 00:07:13.509 buf 0x2000004ffec0 len 64 PASSED 00:07:13.509 malloc 4194304 00:07:13.509 register 0x200000800000 6291456 00:07:13.509 buf 0x2000009fffc0 len 4194304 PASSED 00:07:13.509 free 0x2000004fffc0 3145728 00:07:13.509 free 0x2000004ffec0 64 00:07:13.509 unregister 0x200000400000 4194304 PASSED 00:07:13.509 free 0x2000009fffc0 4194304 00:07:13.509 unregister 0x200000800000 6291456 PASSED 00:07:13.509 malloc 8388608 00:07:13.509 register 0x200000400000 10485760 00:07:13.509 buf 0x2000005fffc0 len 8388608 PASSED 00:07:13.509 free 0x2000005fffc0 8388608 00:07:13.509 unregister 0x200000400000 10485760 PASSED 00:07:13.509 passed 00:07:13.509 00:07:13.509 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.509 suites 1 1 n/a 0 0 00:07:13.509 tests 1 1 1 0 0 00:07:13.509 asserts 15 15 15 0 n/a 00:07:13.509 00:07:13.509 Elapsed time = 0.088 seconds 00:07:13.509 00:07:13.509 real 0m0.292s 00:07:13.509 user 0m0.126s 00:07:13.509 sys 0m0.063s 00:07:13.509 20:20:07 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.509 20:20:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:13.509 ************************************ 00:07:13.510 END TEST env_mem_callbacks 00:07:13.510 ************************************ 00:07:13.510 00:07:13.510 real 0m11.114s 00:07:13.510 user 0m9.285s 00:07:13.510 sys 0m1.466s 00:07:13.510 20:20:07 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.510 20:20:07 env -- common/autotest_common.sh@10 -- # set +x 00:07:13.510 ************************************ 00:07:13.510 END TEST env 00:07:13.510 ************************************ 00:07:13.776 20:20:07 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:13.776 20:20:07 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.776 20:20:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.776 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:07:13.776 ************************************ 00:07:13.776 START TEST rpc 00:07:13.776 ************************************ 00:07:13.776 20:20:07 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:13.776 * Looking for test storage... 00:07:13.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:13.776 20:20:07 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.776 20:20:07 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.776 20:20:07 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.045 20:20:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.045 20:20:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.045 20:20:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.045 20:20:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.045 20:20:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.045 20:20:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.045 20:20:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.045 20:20:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:14.045 20:20:07 rpc -- scripts/common.sh@345 -- # : 1 00:07:14.045 20:20:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.045 20:20:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.045 20:20:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:14.045 20:20:07 rpc -- scripts/common.sh@353 -- # local d=1 00:07:14.045 20:20:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.045 20:20:07 rpc -- scripts/common.sh@355 -- # echo 1 00:07:14.045 20:20:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.045 20:20:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@353 -- # local d=2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.045 20:20:07 rpc -- scripts/common.sh@355 -- # echo 2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.045 20:20:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.045 20:20:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.045 20:20:07 rpc -- scripts/common.sh@368 -- # return 0 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.045 --rc genhtml_branch_coverage=1 00:07:14.045 --rc genhtml_function_coverage=1 00:07:14.045 --rc genhtml_legend=1 00:07:14.045 --rc geninfo_all_blocks=1 00:07:14.045 --rc geninfo_unexecuted_blocks=1 00:07:14.045 00:07:14.045 ' 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.045 --rc genhtml_branch_coverage=1 00:07:14.045 --rc genhtml_function_coverage=1 00:07:14.045 --rc genhtml_legend=1 00:07:14.045 --rc geninfo_all_blocks=1 00:07:14.045 --rc geninfo_unexecuted_blocks=1 00:07:14.045 00:07:14.045 ' 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:14.045 --rc genhtml_branch_coverage=1 00:07:14.045 --rc genhtml_function_coverage=1 00:07:14.045 --rc genhtml_legend=1 00:07:14.045 --rc geninfo_all_blocks=1 00:07:14.045 --rc geninfo_unexecuted_blocks=1 00:07:14.045 00:07:14.045 ' 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.045 --rc genhtml_branch_coverage=1 00:07:14.045 --rc genhtml_function_coverage=1 00:07:14.045 --rc genhtml_legend=1 00:07:14.045 --rc geninfo_all_blocks=1 00:07:14.045 --rc geninfo_unexecuted_blocks=1 00:07:14.045 00:07:14.045 ' 00:07:14.045 20:20:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:14.045 20:20:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57001 00:07:14.045 20:20:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:14.045 20:20:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57001 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 57001 ']' 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.045 20:20:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.045 [2024-11-26 20:20:07.464326] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:07:14.045 [2024-11-26 20:20:07.464459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57001 ] 00:07:14.304 [2024-11-26 20:20:07.632416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.304 [2024-11-26 20:20:07.765267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:14.304 [2024-11-26 20:20:07.765527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57001' to capture a snapshot of events at runtime. 00:07:14.304 [2024-11-26 20:20:07.765594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.304 [2024-11-26 20:20:07.765649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.304 [2024-11-26 20:20:07.765704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57001 for offline analysis/debug. 
00:07:14.304 [2024-11-26 20:20:07.767214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.243 20:20:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.243 20:20:08 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:15.243 20:20:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:15.243 20:20:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:15.243 20:20:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:15.243 20:20:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:15.243 20:20:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.243 20:20:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.243 20:20:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.243 ************************************ 00:07:15.243 START TEST rpc_integrity 00:07:15.243 ************************************ 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:15.243 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.243 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:15.243 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:15.243 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:15.243 20:20:08 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.243 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:15.243 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.243 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.502 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.502 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:15.502 { 00:07:15.502 "name": "Malloc0", 00:07:15.502 "aliases": [ 00:07:15.502 "630ba57d-6439-4558-95f1-a80e6e983edf" 00:07:15.502 ], 00:07:15.502 "product_name": "Malloc disk", 00:07:15.502 "block_size": 512, 00:07:15.502 "num_blocks": 16384, 00:07:15.502 "uuid": "630ba57d-6439-4558-95f1-a80e6e983edf", 00:07:15.502 "assigned_rate_limits": { 00:07:15.502 "rw_ios_per_sec": 0, 00:07:15.502 "rw_mbytes_per_sec": 0, 00:07:15.502 "r_mbytes_per_sec": 0, 00:07:15.502 "w_mbytes_per_sec": 0 00:07:15.502 }, 00:07:15.503 "claimed": false, 00:07:15.503 "zoned": false, 00:07:15.503 "supported_io_types": { 00:07:15.503 "read": true, 00:07:15.503 "write": true, 00:07:15.503 "unmap": true, 00:07:15.503 "flush": true, 00:07:15.503 "reset": true, 00:07:15.503 "nvme_admin": false, 00:07:15.503 "nvme_io": false, 00:07:15.503 "nvme_io_md": false, 00:07:15.503 "write_zeroes": true, 00:07:15.503 "zcopy": true, 00:07:15.503 "get_zone_info": false, 00:07:15.503 "zone_management": false, 00:07:15.503 "zone_append": false, 00:07:15.503 "compare": false, 00:07:15.503 "compare_and_write": false, 00:07:15.503 "abort": true, 00:07:15.503 "seek_hole": false, 
00:07:15.503 "seek_data": false, 00:07:15.503 "copy": true, 00:07:15.503 "nvme_iov_md": false 00:07:15.503 }, 00:07:15.503 "memory_domains": [ 00:07:15.503 { 00:07:15.503 "dma_device_id": "system", 00:07:15.503 "dma_device_type": 1 00:07:15.503 }, 00:07:15.503 { 00:07:15.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.503 "dma_device_type": 2 00:07:15.503 } 00:07:15.503 ], 00:07:15.503 "driver_specific": {} 00:07:15.503 } 00:07:15.503 ]' 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.503 [2024-11-26 20:20:08.858005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:15.503 [2024-11-26 20:20:08.858466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.503 [2024-11-26 20:20:08.858572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:15.503 [2024-11-26 20:20:08.858633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.503 [2024-11-26 20:20:08.861156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.503 [2024-11-26 20:20:08.861320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:15.503 Passthru0 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:15.503 { 00:07:15.503 "name": "Malloc0", 00:07:15.503 "aliases": [ 00:07:15.503 "630ba57d-6439-4558-95f1-a80e6e983edf" 00:07:15.503 ], 00:07:15.503 "product_name": "Malloc disk", 00:07:15.503 "block_size": 512, 00:07:15.503 "num_blocks": 16384, 00:07:15.503 "uuid": "630ba57d-6439-4558-95f1-a80e6e983edf", 00:07:15.503 "assigned_rate_limits": { 00:07:15.503 "rw_ios_per_sec": 0, 00:07:15.503 "rw_mbytes_per_sec": 0, 00:07:15.503 "r_mbytes_per_sec": 0, 00:07:15.503 "w_mbytes_per_sec": 0 00:07:15.503 }, 00:07:15.503 "claimed": true, 00:07:15.503 "claim_type": "exclusive_write", 00:07:15.503 "zoned": false, 00:07:15.503 "supported_io_types": { 00:07:15.503 "read": true, 00:07:15.503 "write": true, 00:07:15.503 "unmap": true, 00:07:15.503 "flush": true, 00:07:15.503 "reset": true, 00:07:15.503 "nvme_admin": false, 00:07:15.503 "nvme_io": false, 00:07:15.503 "nvme_io_md": false, 00:07:15.503 "write_zeroes": true, 00:07:15.503 "zcopy": true, 00:07:15.503 "get_zone_info": false, 00:07:15.503 "zone_management": false, 00:07:15.503 "zone_append": false, 00:07:15.503 "compare": false, 00:07:15.503 "compare_and_write": false, 00:07:15.503 "abort": true, 00:07:15.503 "seek_hole": false, 00:07:15.503 "seek_data": false, 00:07:15.503 "copy": true, 00:07:15.503 "nvme_iov_md": false 00:07:15.503 }, 00:07:15.503 "memory_domains": [ 00:07:15.503 { 00:07:15.503 "dma_device_id": "system", 00:07:15.503 "dma_device_type": 1 00:07:15.503 }, 00:07:15.503 { 00:07:15.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.503 "dma_device_type": 2 00:07:15.503 } 00:07:15.503 ], 00:07:15.503 "driver_specific": {} 00:07:15.503 }, 00:07:15.503 { 00:07:15.503 "name": "Passthru0", 00:07:15.503 "aliases": [ 00:07:15.503 "503b1e8a-12ee-5395-ba28-c1523b388d60" 00:07:15.503 ], 00:07:15.503 "product_name": "passthru", 00:07:15.503 
"block_size": 512, 00:07:15.503 "num_blocks": 16384, 00:07:15.503 "uuid": "503b1e8a-12ee-5395-ba28-c1523b388d60", 00:07:15.503 "assigned_rate_limits": { 00:07:15.503 "rw_ios_per_sec": 0, 00:07:15.503 "rw_mbytes_per_sec": 0, 00:07:15.503 "r_mbytes_per_sec": 0, 00:07:15.503 "w_mbytes_per_sec": 0 00:07:15.503 }, 00:07:15.503 "claimed": false, 00:07:15.503 "zoned": false, 00:07:15.503 "supported_io_types": { 00:07:15.503 "read": true, 00:07:15.503 "write": true, 00:07:15.503 "unmap": true, 00:07:15.503 "flush": true, 00:07:15.503 "reset": true, 00:07:15.503 "nvme_admin": false, 00:07:15.503 "nvme_io": false, 00:07:15.503 "nvme_io_md": false, 00:07:15.503 "write_zeroes": true, 00:07:15.503 "zcopy": true, 00:07:15.503 "get_zone_info": false, 00:07:15.503 "zone_management": false, 00:07:15.503 "zone_append": false, 00:07:15.503 "compare": false, 00:07:15.503 "compare_and_write": false, 00:07:15.503 "abort": true, 00:07:15.503 "seek_hole": false, 00:07:15.503 "seek_data": false, 00:07:15.503 "copy": true, 00:07:15.503 "nvme_iov_md": false 00:07:15.503 }, 00:07:15.503 "memory_domains": [ 00:07:15.503 { 00:07:15.503 "dma_device_id": "system", 00:07:15.503 "dma_device_type": 1 00:07:15.503 }, 00:07:15.503 { 00:07:15.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.503 "dma_device_type": 2 00:07:15.503 } 00:07:15.503 ], 00:07:15.503 "driver_specific": { 00:07:15.503 "passthru": { 00:07:15.503 "name": "Passthru0", 00:07:15.503 "base_bdev_name": "Malloc0" 00:07:15.503 } 00:07:15.503 } 00:07:15.503 } 00:07:15.503 ]' 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.503 20:20:08 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.503 20:20:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:15.503 20:20:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:15.503 20:20:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:15.503 00:07:15.503 real 0m0.334s 00:07:15.503 user 0m0.184s 00:07:15.503 sys 0m0.049s 00:07:15.503 20:20:09 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.503 20:20:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:15.503 ************************************ 00:07:15.503 END TEST rpc_integrity 00:07:15.503 ************************************ 00:07:15.767 20:20:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:15.767 20:20:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.767 20:20:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.767 20:20:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 ************************************ 00:07:15.767 START TEST rpc_plugins 00:07:15.767 ************************************ 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:15.767 { 00:07:15.767 "name": "Malloc1", 00:07:15.767 "aliases": [ 00:07:15.767 "aeaaa322-2aa8-4fd0-93dd-bbfc1062904a" 00:07:15.767 ], 00:07:15.767 "product_name": "Malloc disk", 00:07:15.767 "block_size": 4096, 00:07:15.767 "num_blocks": 256, 00:07:15.767 "uuid": "aeaaa322-2aa8-4fd0-93dd-bbfc1062904a", 00:07:15.767 "assigned_rate_limits": { 00:07:15.767 "rw_ios_per_sec": 0, 00:07:15.767 "rw_mbytes_per_sec": 0, 00:07:15.767 "r_mbytes_per_sec": 0, 00:07:15.767 "w_mbytes_per_sec": 0 00:07:15.767 }, 00:07:15.767 "claimed": false, 00:07:15.767 "zoned": false, 00:07:15.767 "supported_io_types": { 00:07:15.767 "read": true, 00:07:15.767 "write": true, 00:07:15.767 "unmap": true, 00:07:15.767 "flush": true, 00:07:15.767 "reset": true, 00:07:15.767 "nvme_admin": false, 00:07:15.767 "nvme_io": false, 00:07:15.767 "nvme_io_md": false, 00:07:15.767 "write_zeroes": true, 00:07:15.767 "zcopy": true, 00:07:15.767 "get_zone_info": false, 00:07:15.767 "zone_management": false, 00:07:15.767 "zone_append": false, 00:07:15.767 "compare": false, 00:07:15.767 "compare_and_write": false, 00:07:15.767 "abort": true, 00:07:15.767 "seek_hole": false, 00:07:15.767 "seek_data": false, 00:07:15.767 "copy": 
true, 00:07:15.767 "nvme_iov_md": false 00:07:15.767 }, 00:07:15.767 "memory_domains": [ 00:07:15.767 { 00:07:15.767 "dma_device_id": "system", 00:07:15.767 "dma_device_type": 1 00:07:15.767 }, 00:07:15.767 { 00:07:15.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.767 "dma_device_type": 2 00:07:15.767 } 00:07:15.767 ], 00:07:15.767 "driver_specific": {} 00:07:15.767 } 00:07:15.767 ]' 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:15.767 20:20:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:15.767 00:07:15.767 real 0m0.175s 00:07:15.767 user 0m0.100s 00:07:15.767 sys 0m0.018s 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.767 20:20:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 ************************************ 00:07:15.767 END TEST rpc_plugins 00:07:15.767 ************************************ 00:07:16.028 20:20:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:16.028 20:20:09 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.028 20:20:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.028 20:20:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.028 ************************************ 00:07:16.028 START TEST rpc_trace_cmd_test 00:07:16.028 ************************************ 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:16.028 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57001", 00:07:16.028 "tpoint_group_mask": "0x8", 00:07:16.028 "iscsi_conn": { 00:07:16.028 "mask": "0x2", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "scsi": { 00:07:16.028 "mask": "0x4", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "bdev": { 00:07:16.028 "mask": "0x8", 00:07:16.028 "tpoint_mask": "0xffffffffffffffff" 00:07:16.028 }, 00:07:16.028 "nvmf_rdma": { 00:07:16.028 "mask": "0x10", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "nvmf_tcp": { 00:07:16.028 "mask": "0x20", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "ftl": { 00:07:16.028 "mask": "0x40", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "blobfs": { 00:07:16.028 "mask": "0x80", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "dsa": { 00:07:16.028 "mask": "0x200", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "thread": { 00:07:16.028 "mask": "0x400", 00:07:16.028 
"tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "nvme_pcie": { 00:07:16.028 "mask": "0x800", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "iaa": { 00:07:16.028 "mask": "0x1000", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "nvme_tcp": { 00:07:16.028 "mask": "0x2000", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "bdev_nvme": { 00:07:16.028 "mask": "0x4000", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "sock": { 00:07:16.028 "mask": "0x8000", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "blob": { 00:07:16.028 "mask": "0x10000", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "bdev_raid": { 00:07:16.028 "mask": "0x20000", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 }, 00:07:16.028 "scheduler": { 00:07:16.028 "mask": "0x40000", 00:07:16.028 "tpoint_mask": "0x0" 00:07:16.028 } 00:07:16.028 }' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:16.028 00:07:16.028 real 0m0.238s 00:07:16.028 user 0m0.192s 00:07:16.028 sys 0m0.037s 00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:16.028 20:20:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.028 ************************************ 00:07:16.028 END TEST rpc_trace_cmd_test 00:07:16.028 ************************************ 00:07:16.288 20:20:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:16.288 20:20:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:16.288 20:20:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:16.288 20:20:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.288 20:20:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.288 20:20:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 ************************************ 00:07:16.288 START TEST rpc_daemon_integrity 00:07:16.288 ************************************ 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:16.288 { 00:07:16.288 "name": "Malloc2", 00:07:16.288 "aliases": [ 00:07:16.288 "9cb08fb0-e0f1-4f9f-b545-533ab9d83bb1" 00:07:16.288 ], 00:07:16.288 "product_name": "Malloc disk", 00:07:16.288 "block_size": 512, 00:07:16.288 "num_blocks": 16384, 00:07:16.288 "uuid": "9cb08fb0-e0f1-4f9f-b545-533ab9d83bb1", 00:07:16.288 "assigned_rate_limits": { 00:07:16.288 "rw_ios_per_sec": 0, 00:07:16.288 "rw_mbytes_per_sec": 0, 00:07:16.288 "r_mbytes_per_sec": 0, 00:07:16.288 "w_mbytes_per_sec": 0 00:07:16.288 }, 00:07:16.288 "claimed": false, 00:07:16.288 "zoned": false, 00:07:16.288 "supported_io_types": { 00:07:16.288 "read": true, 00:07:16.288 "write": true, 00:07:16.288 "unmap": true, 00:07:16.288 "flush": true, 00:07:16.288 "reset": true, 00:07:16.288 "nvme_admin": false, 00:07:16.288 "nvme_io": false, 00:07:16.288 "nvme_io_md": false, 00:07:16.288 "write_zeroes": true, 00:07:16.288 "zcopy": true, 00:07:16.288 "get_zone_info": false, 00:07:16.288 "zone_management": false, 00:07:16.288 "zone_append": false, 00:07:16.288 "compare": false, 00:07:16.288 "compare_and_write": false, 00:07:16.288 "abort": true, 00:07:16.288 "seek_hole": false, 00:07:16.288 "seek_data": false, 00:07:16.288 "copy": true, 00:07:16.288 "nvme_iov_md": false 00:07:16.288 }, 00:07:16.288 "memory_domains": [ 00:07:16.288 { 00:07:16.288 "dma_device_id": "system", 00:07:16.288 "dma_device_type": 1 00:07:16.288 }, 00:07:16.288 { 00:07:16.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.288 "dma_device_type": 2 00:07:16.288 } 
00:07:16.288 ], 00:07:16.288 "driver_specific": {} 00:07:16.288 } 00:07:16.288 ]' 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 [2024-11-26 20:20:09.789176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:16.288 [2024-11-26 20:20:09.789451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.288 [2024-11-26 20:20:09.789540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:16.288 [2024-11-26 20:20:09.789601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.288 [2024-11-26 20:20:09.792205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.288 [2024-11-26 20:20:09.792356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:16.288 Passthru0 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:16.288 { 00:07:16.288 "name": "Malloc2", 00:07:16.288 "aliases": [ 00:07:16.288 "9cb08fb0-e0f1-4f9f-b545-533ab9d83bb1" 
00:07:16.288 ], 00:07:16.288 "product_name": "Malloc disk", 00:07:16.288 "block_size": 512, 00:07:16.288 "num_blocks": 16384, 00:07:16.288 "uuid": "9cb08fb0-e0f1-4f9f-b545-533ab9d83bb1", 00:07:16.288 "assigned_rate_limits": { 00:07:16.288 "rw_ios_per_sec": 0, 00:07:16.288 "rw_mbytes_per_sec": 0, 00:07:16.288 "r_mbytes_per_sec": 0, 00:07:16.288 "w_mbytes_per_sec": 0 00:07:16.288 }, 00:07:16.288 "claimed": true, 00:07:16.288 "claim_type": "exclusive_write", 00:07:16.288 "zoned": false, 00:07:16.288 "supported_io_types": { 00:07:16.288 "read": true, 00:07:16.288 "write": true, 00:07:16.288 "unmap": true, 00:07:16.288 "flush": true, 00:07:16.288 "reset": true, 00:07:16.288 "nvme_admin": false, 00:07:16.288 "nvme_io": false, 00:07:16.288 "nvme_io_md": false, 00:07:16.288 "write_zeroes": true, 00:07:16.288 "zcopy": true, 00:07:16.288 "get_zone_info": false, 00:07:16.288 "zone_management": false, 00:07:16.288 "zone_append": false, 00:07:16.288 "compare": false, 00:07:16.288 "compare_and_write": false, 00:07:16.288 "abort": true, 00:07:16.288 "seek_hole": false, 00:07:16.288 "seek_data": false, 00:07:16.288 "copy": true, 00:07:16.288 "nvme_iov_md": false 00:07:16.288 }, 00:07:16.288 "memory_domains": [ 00:07:16.288 { 00:07:16.288 "dma_device_id": "system", 00:07:16.288 "dma_device_type": 1 00:07:16.288 }, 00:07:16.288 { 00:07:16.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.288 "dma_device_type": 2 00:07:16.288 } 00:07:16.288 ], 00:07:16.288 "driver_specific": {} 00:07:16.288 }, 00:07:16.288 { 00:07:16.288 "name": "Passthru0", 00:07:16.288 "aliases": [ 00:07:16.288 "8bdd4d37-898a-5874-b40a-51f8af3a2ec7" 00:07:16.288 ], 00:07:16.288 "product_name": "passthru", 00:07:16.288 "block_size": 512, 00:07:16.288 "num_blocks": 16384, 00:07:16.288 "uuid": "8bdd4d37-898a-5874-b40a-51f8af3a2ec7", 00:07:16.288 "assigned_rate_limits": { 00:07:16.288 "rw_ios_per_sec": 0, 00:07:16.288 "rw_mbytes_per_sec": 0, 00:07:16.288 "r_mbytes_per_sec": 0, 00:07:16.288 "w_mbytes_per_sec": 0 
00:07:16.288 }, 00:07:16.288 "claimed": false, 00:07:16.288 "zoned": false, 00:07:16.288 "supported_io_types": { 00:07:16.288 "read": true, 00:07:16.288 "write": true, 00:07:16.288 "unmap": true, 00:07:16.288 "flush": true, 00:07:16.288 "reset": true, 00:07:16.288 "nvme_admin": false, 00:07:16.288 "nvme_io": false, 00:07:16.288 "nvme_io_md": false, 00:07:16.288 "write_zeroes": true, 00:07:16.288 "zcopy": true, 00:07:16.288 "get_zone_info": false, 00:07:16.288 "zone_management": false, 00:07:16.288 "zone_append": false, 00:07:16.288 "compare": false, 00:07:16.288 "compare_and_write": false, 00:07:16.288 "abort": true, 00:07:16.288 "seek_hole": false, 00:07:16.288 "seek_data": false, 00:07:16.288 "copy": true, 00:07:16.288 "nvme_iov_md": false 00:07:16.288 }, 00:07:16.288 "memory_domains": [ 00:07:16.288 { 00:07:16.288 "dma_device_id": "system", 00:07:16.288 "dma_device_type": 1 00:07:16.288 }, 00:07:16.288 { 00:07:16.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.288 "dma_device_type": 2 00:07:16.288 } 00:07:16.288 ], 00:07:16.288 "driver_specific": { 00:07:16.288 "passthru": { 00:07:16.288 "name": "Passthru0", 00:07:16.288 "base_bdev_name": "Malloc2" 00:07:16.288 } 00:07:16.288 } 00:07:16.288 } 00:07:16.288 ]' 00:07:16.288 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:16.549 00:07:16.549 real 0m0.336s 00:07:16.549 user 0m0.174s 00:07:16.549 sys 0m0.059s 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.549 20:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.549 ************************************ 00:07:16.549 END TEST rpc_daemon_integrity 00:07:16.549 ************************************ 00:07:16.549 20:20:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:16.549 20:20:10 rpc -- rpc/rpc.sh@84 -- # killprocess 57001 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@954 -- # '[' -z 57001 ']' 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@958 -- # kill -0 57001 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@959 -- # uname 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57001 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.549 
killing process with pid 57001 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57001' 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@973 -- # kill 57001 00:07:16.549 20:20:10 rpc -- common/autotest_common.sh@978 -- # wait 57001 00:07:19.188 00:07:19.188 real 0m5.565s 00:07:19.188 user 0m6.099s 00:07:19.188 sys 0m0.902s 00:07:19.188 20:20:12 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.188 20:20:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.188 ************************************ 00:07:19.188 END TEST rpc 00:07:19.188 ************************************ 00:07:19.188 20:20:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:19.188 20:20:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.188 20:20:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.188 20:20:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.448 ************************************ 00:07:19.448 START TEST skip_rpc 00:07:19.448 ************************************ 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:19.448 * Looking for test storage... 
00:07:19.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.448 20:20:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.448 --rc genhtml_branch_coverage=1 00:07:19.448 --rc genhtml_function_coverage=1 00:07:19.448 --rc genhtml_legend=1 00:07:19.448 --rc geninfo_all_blocks=1 00:07:19.448 --rc geninfo_unexecuted_blocks=1 00:07:19.448 00:07:19.448 ' 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.448 --rc genhtml_branch_coverage=1 00:07:19.448 --rc genhtml_function_coverage=1 00:07:19.448 --rc genhtml_legend=1 00:07:19.448 --rc geninfo_all_blocks=1 00:07:19.448 --rc geninfo_unexecuted_blocks=1 00:07:19.448 00:07:19.448 ' 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.448 --rc genhtml_branch_coverage=1 00:07:19.448 --rc genhtml_function_coverage=1 00:07:19.448 --rc genhtml_legend=1 00:07:19.448 --rc geninfo_all_blocks=1 00:07:19.448 --rc geninfo_unexecuted_blocks=1 00:07:19.448 00:07:19.448 ' 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.448 --rc genhtml_branch_coverage=1 00:07:19.448 --rc genhtml_function_coverage=1 00:07:19.448 --rc genhtml_legend=1 00:07:19.448 --rc geninfo_all_blocks=1 00:07:19.448 --rc geninfo_unexecuted_blocks=1 00:07:19.448 00:07:19.448 ' 00:07:19.448 20:20:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:19.448 20:20:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:19.448 20:20:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.448 20:20:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.448 ************************************ 00:07:19.448 START TEST skip_rpc 00:07:19.449 ************************************ 00:07:19.449 20:20:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:19.449 20:20:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:19.449 20:20:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57230 00:07:19.449 20:20:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.449 20:20:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:19.709 [2024-11-26 20:20:13.078121] Starting SPDK v25.01-pre 
git sha1 0836dccda / DPDK 24.03.0 initialization... 00:07:19.709 [2024-11-26 20:20:13.078268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57230 ] 00:07:19.709 [2024-11-26 20:20:13.258330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.969 [2024-11-26 20:20:13.394019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:25.251 20:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57230 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57230 ']' 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57230 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57230 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.251 killing process with pid 57230 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57230' 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57230 00:07:25.251 20:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57230 00:07:27.159 00:07:27.159 real 0m7.582s 00:07:27.159 user 0m7.102s 00:07:27.159 sys 0m0.394s 00:07:27.159 20:20:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.159 20:20:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 ************************************ 00:07:27.159 END TEST skip_rpc 00:07:27.159 ************************************ 00:07:27.159 20:20:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:27.159 20:20:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.159 20:20:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.159 20:20:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 
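The skip_rpc section above exercises a negative-test pattern: spdk_tgt is started with --no-rpc-server, the NOT/valid_exec_arg wrapper runs rpc_cmd spdk_get_version expecting it to fail (es=1), and the target is then killed by pid. A minimal sketch of that expect-failure check, with a plain `false` command standing in for the unavailable rpc_cmd (the SPDK binaries and pid handling from the log are not reproduced here):

```python
import subprocess

def expect_failure(cmd):
    """Run cmd and confirm it exits nonzero, mirroring the es=1
    check autotest_common.sh performs when the RPC server is off."""
    result = subprocess.run(cmd, capture_output=True)
    es = result.returncode
    assert es != 0, f"{cmd} unexpectedly succeeded"
    return es

# 'false' stands in for 'rpc_cmd spdk_get_version' against a target
# started with --no-rpc-server, which exposes no listening socket.
exit_status = expect_failure(["false"])
print("rpc call failed as expected, es =", exit_status)
```

The same pattern generalizes to any command that must fail for the test to pass; the trap set at rpc.sh@18 guarantees the target is killed even when the assertion trips.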
************************************ 00:07:27.159 START TEST skip_rpc_with_json 00:07:27.159 ************************************ 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57334 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57334 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57334 ']' 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.159 20:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:27.419 [2024-11-26 20:20:20.740537] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:07:27.419 [2024-11-26 20:20:20.740663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57334 ] 00:07:27.419 [2024-11-26 20:20:20.919385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.679 [2024-11-26 20:20:21.052212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.617 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.617 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:28.617 20:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:28.617 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.617 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:28.617 [2024-11-26 20:20:21.973052] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:28.617 request: 00:07:28.617 { 00:07:28.617 "trtype": "tcp", 00:07:28.617 "method": "nvmf_get_transports", 00:07:28.617 "req_id": 1 00:07:28.617 } 00:07:28.617 Got JSON-RPC error response 00:07:28.617 response: 00:07:28.617 { 00:07:28.617 "code": -19, 00:07:28.618 "message": "No such device" 00:07:28.618 } 00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:28.618 [2024-11-26 20:20:21.985201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.618 20:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:28.618 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.618 20:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:28.618 { 00:07:28.618 "subsystems": [ 00:07:28.618 { 00:07:28.618 "subsystem": "fsdev", 00:07:28.618 "config": [ 00:07:28.618 { 00:07:28.618 "method": "fsdev_set_opts", 00:07:28.618 "params": { 00:07:28.618 "fsdev_io_pool_size": 65535, 00:07:28.618 "fsdev_io_cache_size": 256 00:07:28.618 } 00:07:28.618 } 00:07:28.618 ] 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "subsystem": "keyring", 00:07:28.618 "config": [] 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "subsystem": "iobuf", 00:07:28.618 "config": [ 00:07:28.618 { 00:07:28.618 "method": "iobuf_set_options", 00:07:28.618 "params": { 00:07:28.618 "small_pool_count": 8192, 00:07:28.618 "large_pool_count": 1024, 00:07:28.618 "small_bufsize": 8192, 00:07:28.618 "large_bufsize": 135168, 00:07:28.618 "enable_numa": false 00:07:28.618 } 00:07:28.618 } 00:07:28.618 ] 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "subsystem": "sock", 00:07:28.618 "config": [ 00:07:28.618 { 00:07:28.618 "method": "sock_set_default_impl", 00:07:28.618 "params": { 00:07:28.618 "impl_name": "posix" 00:07:28.618 } 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "method": "sock_impl_set_options", 00:07:28.618 "params": { 00:07:28.618 "impl_name": "ssl", 00:07:28.618 "recv_buf_size": 4096, 00:07:28.618 "send_buf_size": 4096, 00:07:28.618 "enable_recv_pipe": true, 00:07:28.618 "enable_quickack": false, 00:07:28.618 
"enable_placement_id": 0, 00:07:28.618 "enable_zerocopy_send_server": true, 00:07:28.618 "enable_zerocopy_send_client": false, 00:07:28.618 "zerocopy_threshold": 0, 00:07:28.618 "tls_version": 0, 00:07:28.618 "enable_ktls": false 00:07:28.618 } 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "method": "sock_impl_set_options", 00:07:28.618 "params": { 00:07:28.618 "impl_name": "posix", 00:07:28.618 "recv_buf_size": 2097152, 00:07:28.618 "send_buf_size": 2097152, 00:07:28.618 "enable_recv_pipe": true, 00:07:28.618 "enable_quickack": false, 00:07:28.618 "enable_placement_id": 0, 00:07:28.618 "enable_zerocopy_send_server": true, 00:07:28.618 "enable_zerocopy_send_client": false, 00:07:28.618 "zerocopy_threshold": 0, 00:07:28.618 "tls_version": 0, 00:07:28.618 "enable_ktls": false 00:07:28.618 } 00:07:28.618 } 00:07:28.618 ] 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "subsystem": "vmd", 00:07:28.618 "config": [] 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "subsystem": "accel", 00:07:28.618 "config": [ 00:07:28.618 { 00:07:28.618 "method": "accel_set_options", 00:07:28.618 "params": { 00:07:28.618 "small_cache_size": 128, 00:07:28.618 "large_cache_size": 16, 00:07:28.618 "task_count": 2048, 00:07:28.618 "sequence_count": 2048, 00:07:28.618 "buf_count": 2048 00:07:28.618 } 00:07:28.618 } 00:07:28.618 ] 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "subsystem": "bdev", 00:07:28.618 "config": [ 00:07:28.618 { 00:07:28.618 "method": "bdev_set_options", 00:07:28.618 "params": { 00:07:28.618 "bdev_io_pool_size": 65535, 00:07:28.618 "bdev_io_cache_size": 256, 00:07:28.618 "bdev_auto_examine": true, 00:07:28.618 "iobuf_small_cache_size": 128, 00:07:28.618 "iobuf_large_cache_size": 16 00:07:28.618 } 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "method": "bdev_raid_set_options", 00:07:28.618 "params": { 00:07:28.618 "process_window_size_kb": 1024, 00:07:28.618 "process_max_bandwidth_mb_sec": 0 00:07:28.618 } 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "method": "bdev_iscsi_set_options", 
00:07:28.618 "params": { 00:07:28.618 "timeout_sec": 30 00:07:28.618 } 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "method": "bdev_nvme_set_options", 00:07:28.618 "params": { 00:07:28.618 "action_on_timeout": "none", 00:07:28.618 "timeout_us": 0, 00:07:28.618 "timeout_admin_us": 0, 00:07:28.618 "keep_alive_timeout_ms": 10000, 00:07:28.618 "arbitration_burst": 0, 00:07:28.618 "low_priority_weight": 0, 00:07:28.618 "medium_priority_weight": 0, 00:07:28.618 "high_priority_weight": 0, 00:07:28.618 "nvme_adminq_poll_period_us": 10000, 00:07:28.618 "nvme_ioq_poll_period_us": 0, 00:07:28.618 "io_queue_requests": 0, 00:07:28.618 "delay_cmd_submit": true, 00:07:28.618 "transport_retry_count": 4, 00:07:28.618 "bdev_retry_count": 3, 00:07:28.618 "transport_ack_timeout": 0, 00:07:28.618 "ctrlr_loss_timeout_sec": 0, 00:07:28.618 "reconnect_delay_sec": 0, 00:07:28.618 "fast_io_fail_timeout_sec": 0, 00:07:28.618 "disable_auto_failback": false, 00:07:28.618 "generate_uuids": false, 00:07:28.618 "transport_tos": 0, 00:07:28.618 "nvme_error_stat": false, 00:07:28.618 "rdma_srq_size": 0, 00:07:28.618 "io_path_stat": false, 00:07:28.618 "allow_accel_sequence": false, 00:07:28.618 "rdma_max_cq_size": 0, 00:07:28.618 "rdma_cm_event_timeout_ms": 0, 00:07:28.618 "dhchap_digests": [ 00:07:28.618 "sha256", 00:07:28.618 "sha384", 00:07:28.618 "sha512" 00:07:28.618 ], 00:07:28.618 "dhchap_dhgroups": [ 00:07:28.618 "null", 00:07:28.618 "ffdhe2048", 00:07:28.618 "ffdhe3072", 00:07:28.618 "ffdhe4096", 00:07:28.618 "ffdhe6144", 00:07:28.618 "ffdhe8192" 00:07:28.618 ] 00:07:28.618 } 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "method": "bdev_nvme_set_hotplug", 00:07:28.619 "params": { 00:07:28.619 "period_us": 100000, 00:07:28.619 "enable": false 00:07:28.619 } 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "method": "bdev_wait_for_examine" 00:07:28.619 } 00:07:28.619 ] 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "subsystem": "scsi", 00:07:28.619 "config": null 00:07:28.619 }, 00:07:28.619 { 
00:07:28.619 "subsystem": "scheduler", 00:07:28.619 "config": [ 00:07:28.619 { 00:07:28.619 "method": "framework_set_scheduler", 00:07:28.619 "params": { 00:07:28.619 "name": "static" 00:07:28.619 } 00:07:28.619 } 00:07:28.619 ] 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "subsystem": "vhost_scsi", 00:07:28.619 "config": [] 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "subsystem": "vhost_blk", 00:07:28.619 "config": [] 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "subsystem": "ublk", 00:07:28.619 "config": [] 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "subsystem": "nbd", 00:07:28.619 "config": [] 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "subsystem": "nvmf", 00:07:28.619 "config": [ 00:07:28.619 { 00:07:28.619 "method": "nvmf_set_config", 00:07:28.619 "params": { 00:07:28.619 "discovery_filter": "match_any", 00:07:28.619 "admin_cmd_passthru": { 00:07:28.619 "identify_ctrlr": false 00:07:28.619 }, 00:07:28.619 "dhchap_digests": [ 00:07:28.619 "sha256", 00:07:28.619 "sha384", 00:07:28.619 "sha512" 00:07:28.619 ], 00:07:28.619 "dhchap_dhgroups": [ 00:07:28.619 "null", 00:07:28.619 "ffdhe2048", 00:07:28.619 "ffdhe3072", 00:07:28.619 "ffdhe4096", 00:07:28.619 "ffdhe6144", 00:07:28.619 "ffdhe8192" 00:07:28.619 ] 00:07:28.619 } 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "method": "nvmf_set_max_subsystems", 00:07:28.619 "params": { 00:07:28.619 "max_subsystems": 1024 00:07:28.619 } 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "method": "nvmf_set_crdt", 00:07:28.619 "params": { 00:07:28.619 "crdt1": 0, 00:07:28.619 "crdt2": 0, 00:07:28.619 "crdt3": 0 00:07:28.619 } 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "method": "nvmf_create_transport", 00:07:28.619 "params": { 00:07:28.619 "trtype": "TCP", 00:07:28.619 "max_queue_depth": 128, 00:07:28.619 "max_io_qpairs_per_ctrlr": 127, 00:07:28.619 "in_capsule_data_size": 4096, 00:07:28.619 "max_io_size": 131072, 00:07:28.619 "io_unit_size": 131072, 00:07:28.619 "max_aq_depth": 128, 00:07:28.619 "num_shared_buffers": 511, 
00:07:28.619 "buf_cache_size": 4294967295, 00:07:28.619 "dif_insert_or_strip": false, 00:07:28.619 "zcopy": false, 00:07:28.619 "c2h_success": true, 00:07:28.619 "sock_priority": 0, 00:07:28.619 "abort_timeout_sec": 1, 00:07:28.619 "ack_timeout": 0, 00:07:28.619 "data_wr_pool_size": 0 00:07:28.619 } 00:07:28.619 } 00:07:28.619 ] 00:07:28.619 }, 00:07:28.619 { 00:07:28.619 "subsystem": "iscsi", 00:07:28.619 "config": [ 00:07:28.619 { 00:07:28.619 "method": "iscsi_set_options", 00:07:28.619 "params": { 00:07:28.619 "node_base": "iqn.2016-06.io.spdk", 00:07:28.619 "max_sessions": 128, 00:07:28.619 "max_connections_per_session": 2, 00:07:28.619 "max_queue_depth": 64, 00:07:28.619 "default_time2wait": 2, 00:07:28.619 "default_time2retain": 20, 00:07:28.619 "first_burst_length": 8192, 00:07:28.619 "immediate_data": true, 00:07:28.619 "allow_duplicated_isid": false, 00:07:28.619 "error_recovery_level": 0, 00:07:28.619 "nop_timeout": 60, 00:07:28.619 "nop_in_interval": 30, 00:07:28.619 "disable_chap": false, 00:07:28.619 "require_chap": false, 00:07:28.619 "mutual_chap": false, 00:07:28.619 "chap_group": 0, 00:07:28.619 "max_large_datain_per_connection": 64, 00:07:28.619 "max_r2t_per_connection": 4, 00:07:28.619 "pdu_pool_size": 36864, 00:07:28.619 "immediate_data_pool_size": 16384, 00:07:28.619 "data_out_pool_size": 2048 00:07:28.619 } 00:07:28.619 } 00:07:28.619 ] 00:07:28.619 } 00:07:28.619 ] 00:07:28.619 } 00:07:28.619 20:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:28.619 20:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57334 00:07:28.619 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57334 ']' 00:07:28.619 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57334 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57334 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.879 killing process with pid 57334 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57334' 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57334 00:07:28.879 20:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57334 00:07:31.416 20:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57390 00:07:31.416 20:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:31.416 20:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:36.690 20:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57390 00:07:36.690 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57390 ']' 00:07:36.690 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57390 00:07:36.690 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:36.691 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.691 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57390 00:07:36.691 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.691 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:36.691 killing process with pid 57390 00:07:36.691 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57390' 00:07:36.691 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57390 00:07:36.691 20:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57390 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:39.232 00:07:39.232 real 0m11.793s 00:07:39.232 user 0m11.248s 00:07:39.232 sys 0m0.880s 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 ************************************ 00:07:39.232 END TEST skip_rpc_with_json 00:07:39.232 ************************************ 00:07:39.232 20:20:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:39.232 20:20:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.232 20:20:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.232 20:20:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 ************************************ 00:07:39.232 START TEST skip_rpc_with_delay 00:07:39.232 ************************************ 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:39.232 20:20:32 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:39.232 [2024-11-26 20:20:32.607846] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
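The error logged above comes from spdk_tgt rejecting `--wait-for-rpc` when `--no-rpc-server` is also passed, which is exactly what this negative test expects. As an illustrative sketch only (not SPDK's actual argument parser), the mutually-exclusive-flag check behaves roughly like this:

```shell
# Hypothetical sketch of the flag-conflict check seen in the error above.
# Not SPDK's real parser; it only mirrors the observable behavior.
check_rpc_flags() {
    local no_rpc_server=0 wait_for_rpc=0 arg
    for arg in "$@"; do
        case "$arg" in
            --no-rpc-server) no_rpc_server=1 ;;
            --wait-for-rpc)  wait_for_rpc=1 ;;
        esac
    done
    if [ "$no_rpc_server" -eq 1 ] && [ "$wait_for_rpc" -eq 1 ]; then
        # Same message the target logs before exiting non-zero.
        echo "Cannot use '--wait-for-rpc' if no RPC server is going to be started." >&2
        return 1
    fi
    return 0
}

check_rpc_flags --no-rpc-server --wait-for-rpc || echo "flag conflict detected"
```

The test harness wraps the failing invocation in its `NOT` helper, so a non-zero exit here is the passing outcome.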
00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.232 00:07:39.232 real 0m0.183s 00:07:39.232 user 0m0.091s 00:07:39.232 sys 0m0.090s 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.232 20:20:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 ************************************ 00:07:39.232 END TEST skip_rpc_with_delay 00:07:39.232 ************************************ 00:07:39.232 20:20:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:39.232 20:20:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:39.232 20:20:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:39.232 20:20:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.232 20:20:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.232 20:20:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 ************************************ 00:07:39.232 START TEST exit_on_failed_rpc_init 00:07:39.232 ************************************ 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57529 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57529 00:07:39.232 20:20:32 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57529 ']' 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.232 20:20:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 [2024-11-26 20:20:32.860302] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:07:39.538 [2024-11-26 20:20:32.860425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57529 ] 00:07:39.538 [2024-11-26 20:20:33.043420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.809 [2024-11-26 20:20:33.171977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:40.745 20:20:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:40.745 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:40.745 [2024-11-26 20:20:34.268006] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
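The `valid_exec_arg` trace above walks `type -t` / `type -P` to confirm the argument is actually executable before running it. A minimal standalone sketch of that pattern (simplified from the `autotest_common.sh` helper being traced, not a verbatim copy):

```shell
# Sketch of the argument-validation pattern traced above: classify an
# argument with `type -t`, and for on-disk files confirm the resolved
# path (`type -P`) is executable before attempting to run it.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        function|builtin) return 0 ;;            # shell function or builtin
        file) [[ -x "$(type -P "$arg")" ]] ;;    # executable file on PATH/disk
        *) return 1 ;;                           # alias, keyword, or not found
    esac
}

valid_exec_arg echo && echo "echo: ok"
valid_exec_arg /no/such/binary || echo "missing binary rejected"
```

This is why the trace shows the `case "$(type -t "$arg")"` branch repeated for each candidate before the final `[[ -x ... ]]` check.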
00:07:40.745 [2024-11-26 20:20:34.268158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57553 ] 00:07:41.004 [2024-11-26 20:20:34.444352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.263 [2024-11-26 20:20:34.587085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.263 [2024-11-26 20:20:34.587191] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:41.263 [2024-11-26 20:20:34.587207] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:41.263 [2024-11-26 20:20:34.587221] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57529 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57529 ']' 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57529 00:07:41.523 20:20:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57529 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57529' 00:07:41.523 killing process with pid 57529 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57529 00:07:41.523 20:20:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57529 00:07:44.058 00:07:44.058 real 0m4.797s 00:07:44.058 user 0m5.238s 00:07:44.058 sys 0m0.585s 00:07:44.058 20:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.058 20:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:44.058 ************************************ 00:07:44.058 END TEST exit_on_failed_rpc_init 00:07:44.058 ************************************ 00:07:44.058 20:20:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:44.058 00:07:44.058 real 0m24.862s 00:07:44.058 user 0m23.901s 00:07:44.058 sys 0m2.257s 00:07:44.058 20:20:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.058 20:20:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.058 ************************************ 00:07:44.058 END TEST skip_rpc 00:07:44.058 ************************************ 00:07:44.316 20:20:37 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:44.316 20:20:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.316 20:20:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.316 20:20:37 -- common/autotest_common.sh@10 -- # set +x 00:07:44.316 ************************************ 00:07:44.316 START TEST rpc_client 00:07:44.316 ************************************ 00:07:44.316 20:20:37 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:44.316 * Looking for test storage... 00:07:44.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:44.316 20:20:37 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.316 20:20:37 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.316 20:20:37 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.316 20:20:37 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.316 20:20:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.316 20:20:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.316 20:20:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.316 20:20:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.317 20:20:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.575 20:20:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:44.575 20:20:37 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.575 20:20:37 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.575 --rc genhtml_branch_coverage=1 00:07:44.575 --rc genhtml_function_coverage=1 00:07:44.575 --rc genhtml_legend=1 00:07:44.575 --rc geninfo_all_blocks=1 00:07:44.575 --rc geninfo_unexecuted_blocks=1 00:07:44.575 00:07:44.575 ' 00:07:44.575 20:20:37 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.575 --rc genhtml_branch_coverage=1 00:07:44.575 --rc genhtml_function_coverage=1 00:07:44.575 --rc 
genhtml_legend=1 00:07:44.575 --rc geninfo_all_blocks=1 00:07:44.575 --rc geninfo_unexecuted_blocks=1 00:07:44.575 00:07:44.575 ' 00:07:44.575 20:20:37 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.575 --rc genhtml_branch_coverage=1 00:07:44.575 --rc genhtml_function_coverage=1 00:07:44.575 --rc genhtml_legend=1 00:07:44.575 --rc geninfo_all_blocks=1 00:07:44.575 --rc geninfo_unexecuted_blocks=1 00:07:44.575 00:07:44.575 ' 00:07:44.576 20:20:37 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.576 --rc genhtml_branch_coverage=1 00:07:44.576 --rc genhtml_function_coverage=1 00:07:44.576 --rc genhtml_legend=1 00:07:44.576 --rc geninfo_all_blocks=1 00:07:44.576 --rc geninfo_unexecuted_blocks=1 00:07:44.576 00:07:44.576 ' 00:07:44.576 20:20:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:44.576 OK 00:07:44.576 20:20:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:44.576 00:07:44.576 real 0m0.289s 00:07:44.576 user 0m0.161s 00:07:44.576 sys 0m0.145s 00:07:44.576 20:20:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.576 20:20:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:44.576 ************************************ 00:07:44.576 END TEST rpc_client 00:07:44.576 ************************************ 00:07:44.576 20:20:38 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:44.576 20:20:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.576 20:20:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.576 20:20:38 -- common/autotest_common.sh@10 -- # set +x 00:07:44.576 ************************************ 00:07:44.576 START TEST json_config 
00:07:44.576 ************************************ 00:07:44.576 20:20:38 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:44.576 20:20:38 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.576 20:20:38 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.576 20:20:38 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.834 20:20:38 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.834 20:20:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.834 20:20:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.834 20:20:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.834 20:20:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.834 20:20:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.834 20:20:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.834 20:20:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.834 20:20:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.834 20:20:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.834 20:20:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.834 20:20:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.835 20:20:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:44.835 20:20:38 json_config -- scripts/common.sh@345 -- # : 1 00:07:44.835 20:20:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.835 20:20:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.835 20:20:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:44.835 20:20:38 json_config -- scripts/common.sh@353 -- # local d=1 00:07:44.835 20:20:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.835 20:20:38 json_config -- scripts/common.sh@355 -- # echo 1 00:07:44.835 20:20:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.835 20:20:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:44.835 20:20:38 json_config -- scripts/common.sh@353 -- # local d=2 00:07:44.835 20:20:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.835 20:20:38 json_config -- scripts/common.sh@355 -- # echo 2 00:07:44.835 20:20:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.835 20:20:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.835 20:20:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.835 20:20:38 json_config -- scripts/common.sh@368 -- # return 0 00:07:44.835 20:20:38 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.835 20:20:38 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.835 --rc genhtml_branch_coverage=1 00:07:44.835 --rc genhtml_function_coverage=1 00:07:44.835 --rc genhtml_legend=1 00:07:44.835 --rc geninfo_all_blocks=1 00:07:44.835 --rc geninfo_unexecuted_blocks=1 00:07:44.835 00:07:44.835 ' 00:07:44.835 20:20:38 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.835 --rc genhtml_branch_coverage=1 00:07:44.835 --rc genhtml_function_coverage=1 00:07:44.835 --rc genhtml_legend=1 00:07:44.835 --rc geninfo_all_blocks=1 00:07:44.835 --rc geninfo_unexecuted_blocks=1 00:07:44.835 00:07:44.835 ' 00:07:44.835 20:20:38 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.835 --rc genhtml_branch_coverage=1 00:07:44.835 --rc genhtml_function_coverage=1 00:07:44.835 --rc genhtml_legend=1 00:07:44.835 --rc geninfo_all_blocks=1 00:07:44.835 --rc geninfo_unexecuted_blocks=1 00:07:44.835 00:07:44.835 ' 00:07:44.835 20:20:38 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.835 --rc genhtml_branch_coverage=1 00:07:44.835 --rc genhtml_function_coverage=1 00:07:44.835 --rc genhtml_legend=1 00:07:44.835 --rc geninfo_all_blocks=1 00:07:44.835 --rc geninfo_unexecuted_blocks=1 00:07:44.835 00:07:44.835 ' 00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:87890ee8-f77f-4451-b4c6-6875f86d77cd 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=87890ee8-f77f-4451-b4c6-6875f86d77cd 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.835 20:20:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.835 20:20:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.835 20:20:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.835 20:20:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.835 20:20:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.835 20:20:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.835 20:20:38 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.835 20:20:38 json_config -- paths/export.sh@5 -- # export PATH 00:07:44.835 20:20:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@51 -- # : 0 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.835 20:20:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:44.835 WARNING: No tests are enabled so not running JSON configuration tests 00:07:44.835 20:20:38 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:44.835 00:07:44.835 real 0m0.239s 00:07:44.835 user 0m0.143s 00:07:44.835 sys 0m0.098s 00:07:44.835 20:20:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.835 20:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:44.835 ************************************ 00:07:44.835 END TEST json_config 00:07:44.835 ************************************ 00:07:44.835 20:20:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:44.835 20:20:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.835 20:20:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.835 20:20:38 -- common/autotest_common.sh@10 -- # set +x 00:07:44.835 ************************************ 00:07:44.835 START TEST json_config_extra_key 00:07:44.835 ************************************ 00:07:44.835 20:20:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:44.835 20:20:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.095 20:20:38 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:07:45.095 20:20:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.095 20:20:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.095 20:20:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.096 20:20:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.096 --rc genhtml_branch_coverage=1 00:07:45.096 --rc genhtml_function_coverage=1 00:07:45.096 --rc genhtml_legend=1 00:07:45.096 --rc geninfo_all_blocks=1 00:07:45.096 --rc geninfo_unexecuted_blocks=1 00:07:45.096 00:07:45.096 ' 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.096 --rc genhtml_branch_coverage=1 00:07:45.096 --rc genhtml_function_coverage=1 00:07:45.096 --rc 
genhtml_legend=1 00:07:45.096 --rc geninfo_all_blocks=1 00:07:45.096 --rc geninfo_unexecuted_blocks=1 00:07:45.096 00:07:45.096 ' 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.096 --rc genhtml_branch_coverage=1 00:07:45.096 --rc genhtml_function_coverage=1 00:07:45.096 --rc genhtml_legend=1 00:07:45.096 --rc geninfo_all_blocks=1 00:07:45.096 --rc geninfo_unexecuted_blocks=1 00:07:45.096 00:07:45.096 ' 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.096 --rc genhtml_branch_coverage=1 00:07:45.096 --rc genhtml_function_coverage=1 00:07:45.096 --rc genhtml_legend=1 00:07:45.096 --rc geninfo_all_blocks=1 00:07:45.096 --rc geninfo_unexecuted_blocks=1 00:07:45.096 00:07:45.096 ' 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:87890ee8-f77f-4451-b4c6-6875f86d77cd 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=87890ee8-f77f-4451-b4c6-6875f86d77cd 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.096 20:20:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.096 20:20:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.096 20:20:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.096 20:20:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.096 20:20:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.096 20:20:38 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.096 20:20:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.096 20:20:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:45.096 20:20:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.096 20:20:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.096 INFO: launching applications... 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:07:45.096 20:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57763 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:45.096 Waiting for target to run... 00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57763 /var/tmp/spdk_tgt.sock 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57763 ']' 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:45.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:45.096 20:20:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.096 20:20:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:45.356 [2024-11-26 20:20:38.668560] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:07:45.356 [2024-11-26 20:20:38.668834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57763 ] 00:07:45.616 [2024-11-26 20:20:39.079595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.881 [2024-11-26 20:20:39.202602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.832 20:20:40 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.832 20:20:40 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:46.832 00:07:46.832 20:20:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:46.832 INFO: shutting down applications... 
00:07:46.832 20:20:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57763 ]] 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57763 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:46.832 20:20:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:47.093 20:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:47.093 20:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:47.093 20:20:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:47.093 20:20:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:47.664 20:20:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:47.664 20:20:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:47.664 20:20:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:47.664 20:20:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:48.233 20:20:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:48.233 20:20:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:48.233 20:20:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:48.233 20:20:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:48.802 20:20:42 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:07:48.802 20:20:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:48.802 20:20:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:48.802 20:20:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:49.062 20:20:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:49.062 20:20:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:49.062 20:20:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:49.062 20:20:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:49.631 20:20:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:49.631 20:20:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:49.631 20:20:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:49.631 20:20:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:50.201 20:20:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:50.201 20:20:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:50.201 20:20:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:07:50.201 20:20:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:50.201 20:20:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:50.201 20:20:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:50.201 SPDK target shutdown done 00:07:50.201 20:20:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:50.201 Success 00:07:50.201 20:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:50.201 ************************************ 00:07:50.201 END TEST json_config_extra_key 00:07:50.201 ************************************ 00:07:50.201 00:07:50.201 real 0m5.290s 00:07:50.201 user 
0m4.766s 00:07:50.201 sys 0m0.597s 00:07:50.201 20:20:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.201 20:20:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:50.201 20:20:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:50.201 20:20:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.201 20:20:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.201 20:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:50.201 ************************************ 00:07:50.201 START TEST alias_rpc 00:07:50.201 ************************************ 00:07:50.201 20:20:43 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:50.461 * Looking for test storage... 00:07:50.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@340 
-- # ver1_l=2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.461 20:20:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 
00:07:50.461 ' 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 00:07:50.461 ' 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 00:07:50.461 ' 00:07:50.461 20:20:43 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 00:07:50.461 ' 00:07:50.461 20:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.461 20:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.461 20:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57882 00:07:50.461 20:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57882 00:07:50.462 20:20:43 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57882 ']' 00:07:50.462 20:20:43 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.462 20:20:43 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.462 20:20:43 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.462 20:20:43 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.462 20:20:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.462 [2024-11-26 20:20:43.971554] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:07:50.462 [2024-11-26 20:20:43.971788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57882 ] 00:07:50.721 [2024-11-26 20:20:44.150948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.981 [2024-11-26 20:20:44.281926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.920 20:20:45 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.920 20:20:45 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.920 20:20:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:52.179 20:20:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57882 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57882 ']' 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57882 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57882 00:07:52.179 killing process with pid 57882 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.179 
20:20:45 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57882' 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@973 -- # kill 57882 00:07:52.179 20:20:45 alias_rpc -- common/autotest_common.sh@978 -- # wait 57882 00:07:55.537 ************************************ 00:07:55.537 END TEST alias_rpc 00:07:55.537 ************************************ 00:07:55.537 00:07:55.537 real 0m4.753s 00:07:55.537 user 0m4.847s 00:07:55.537 sys 0m0.552s 00:07:55.537 20:20:48 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.537 20:20:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 20:20:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:55.537 20:20:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:55.537 20:20:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.537 20:20:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.537 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 ************************************ 00:07:55.537 START TEST spdkcli_tcp 00:07:55.537 ************************************ 00:07:55.537 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:55.537 * Looking for test storage... 
00:07:55.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:55.537 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.537 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:55.537 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:55.537 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:55.537 20:20:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.537 20:20:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.538 20:20:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.538 --rc genhtml_branch_coverage=1 00:07:55.538 --rc genhtml_function_coverage=1 00:07:55.538 --rc genhtml_legend=1 00:07:55.538 --rc geninfo_all_blocks=1 00:07:55.538 --rc geninfo_unexecuted_blocks=1 00:07:55.538 00:07:55.538 ' 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.538 --rc genhtml_branch_coverage=1 00:07:55.538 --rc genhtml_function_coverage=1 00:07:55.538 --rc genhtml_legend=1 00:07:55.538 --rc geninfo_all_blocks=1 00:07:55.538 --rc geninfo_unexecuted_blocks=1 00:07:55.538 00:07:55.538 ' 00:07:55.538 20:20:48 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.538 --rc genhtml_branch_coverage=1 00:07:55.538 --rc genhtml_function_coverage=1 00:07:55.538 --rc genhtml_legend=1 00:07:55.538 --rc geninfo_all_blocks=1 00:07:55.538 --rc geninfo_unexecuted_blocks=1 00:07:55.538 00:07:55.538 ' 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.538 --rc genhtml_branch_coverage=1 00:07:55.538 --rc genhtml_function_coverage=1 00:07:55.538 --rc genhtml_legend=1 00:07:55.538 --rc geninfo_all_blocks=1 00:07:55.538 --rc geninfo_unexecuted_blocks=1 00:07:55.538 00:07:55.538 ' 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57999 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57999 00:07:55.538 20:20:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:55.538 20:20:48 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57999 ']' 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.538 20:20:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.538 [2024-11-26 20:20:48.814725] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:07:55.538 [2024-11-26 20:20:48.814976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57999 ] 00:07:55.538 [2024-11-26 20:20:48.977060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.798 [2024-11-26 20:20:49.107187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.798 [2024-11-26 20:20:49.107194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.739 20:20:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.739 20:20:50 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:56.739 20:20:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58016 00:07:56.739 20:20:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:56.739 20:20:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:56.998 [ 00:07:56.998 "bdev_malloc_delete", 
00:07:56.998 "bdev_malloc_create", 00:07:56.998 "bdev_null_resize", 00:07:56.998 "bdev_null_delete", 00:07:56.998 "bdev_null_create", 00:07:56.998 "bdev_nvme_cuse_unregister", 00:07:56.998 "bdev_nvme_cuse_register", 00:07:56.998 "bdev_opal_new_user", 00:07:56.998 "bdev_opal_set_lock_state", 00:07:56.998 "bdev_opal_delete", 00:07:56.998 "bdev_opal_get_info", 00:07:56.998 "bdev_opal_create", 00:07:56.998 "bdev_nvme_opal_revert", 00:07:56.998 "bdev_nvme_opal_init", 00:07:56.998 "bdev_nvme_send_cmd", 00:07:56.998 "bdev_nvme_set_keys", 00:07:56.998 "bdev_nvme_get_path_iostat", 00:07:56.998 "bdev_nvme_get_mdns_discovery_info", 00:07:56.998 "bdev_nvme_stop_mdns_discovery", 00:07:56.998 "bdev_nvme_start_mdns_discovery", 00:07:56.998 "bdev_nvme_set_multipath_policy", 00:07:56.998 "bdev_nvme_set_preferred_path", 00:07:56.998 "bdev_nvme_get_io_paths", 00:07:56.998 "bdev_nvme_remove_error_injection", 00:07:56.998 "bdev_nvme_add_error_injection", 00:07:56.998 "bdev_nvme_get_discovery_info", 00:07:56.998 "bdev_nvme_stop_discovery", 00:07:56.998 "bdev_nvme_start_discovery", 00:07:56.998 "bdev_nvme_get_controller_health_info", 00:07:56.998 "bdev_nvme_disable_controller", 00:07:56.998 "bdev_nvme_enable_controller", 00:07:56.998 "bdev_nvme_reset_controller", 00:07:56.998 "bdev_nvme_get_transport_statistics", 00:07:56.998 "bdev_nvme_apply_firmware", 00:07:56.998 "bdev_nvme_detach_controller", 00:07:56.998 "bdev_nvme_get_controllers", 00:07:56.998 "bdev_nvme_attach_controller", 00:07:56.998 "bdev_nvme_set_hotplug", 00:07:56.998 "bdev_nvme_set_options", 00:07:56.998 "bdev_passthru_delete", 00:07:56.998 "bdev_passthru_create", 00:07:56.998 "bdev_lvol_set_parent_bdev", 00:07:56.998 "bdev_lvol_set_parent", 00:07:56.998 "bdev_lvol_check_shallow_copy", 00:07:56.998 "bdev_lvol_start_shallow_copy", 00:07:56.998 "bdev_lvol_grow_lvstore", 00:07:56.998 "bdev_lvol_get_lvols", 00:07:56.998 "bdev_lvol_get_lvstores", 00:07:56.998 "bdev_lvol_delete", 00:07:56.998 "bdev_lvol_set_read_only", 
00:07:56.998 "bdev_lvol_resize", 00:07:56.998 "bdev_lvol_decouple_parent", 00:07:56.998 "bdev_lvol_inflate", 00:07:56.998 "bdev_lvol_rename", 00:07:56.998 "bdev_lvol_clone_bdev", 00:07:56.998 "bdev_lvol_clone", 00:07:56.998 "bdev_lvol_snapshot", 00:07:56.998 "bdev_lvol_create", 00:07:56.998 "bdev_lvol_delete_lvstore", 00:07:56.998 "bdev_lvol_rename_lvstore", 00:07:56.998 "bdev_lvol_create_lvstore", 00:07:56.998 "bdev_raid_set_options", 00:07:56.998 "bdev_raid_remove_base_bdev", 00:07:56.998 "bdev_raid_add_base_bdev", 00:07:56.998 "bdev_raid_delete", 00:07:56.998 "bdev_raid_create", 00:07:56.998 "bdev_raid_get_bdevs", 00:07:56.998 "bdev_error_inject_error", 00:07:56.998 "bdev_error_delete", 00:07:56.998 "bdev_error_create", 00:07:56.998 "bdev_split_delete", 00:07:56.998 "bdev_split_create", 00:07:56.998 "bdev_delay_delete", 00:07:56.998 "bdev_delay_create", 00:07:56.999 "bdev_delay_update_latency", 00:07:56.999 "bdev_zone_block_delete", 00:07:56.999 "bdev_zone_block_create", 00:07:56.999 "blobfs_create", 00:07:56.999 "blobfs_detect", 00:07:56.999 "blobfs_set_cache_size", 00:07:56.999 "bdev_aio_delete", 00:07:56.999 "bdev_aio_rescan", 00:07:56.999 "bdev_aio_create", 00:07:56.999 "bdev_ftl_set_property", 00:07:56.999 "bdev_ftl_get_properties", 00:07:56.999 "bdev_ftl_get_stats", 00:07:56.999 "bdev_ftl_unmap", 00:07:56.999 "bdev_ftl_unload", 00:07:56.999 "bdev_ftl_delete", 00:07:56.999 "bdev_ftl_load", 00:07:56.999 "bdev_ftl_create", 00:07:56.999 "bdev_virtio_attach_controller", 00:07:56.999 "bdev_virtio_scsi_get_devices", 00:07:56.999 "bdev_virtio_detach_controller", 00:07:56.999 "bdev_virtio_blk_set_hotplug", 00:07:56.999 "bdev_iscsi_delete", 00:07:56.999 "bdev_iscsi_create", 00:07:56.999 "bdev_iscsi_set_options", 00:07:56.999 "accel_error_inject_error", 00:07:56.999 "ioat_scan_accel_module", 00:07:56.999 "dsa_scan_accel_module", 00:07:56.999 "iaa_scan_accel_module", 00:07:56.999 "keyring_file_remove_key", 00:07:56.999 "keyring_file_add_key", 00:07:56.999 
"keyring_linux_set_options", 00:07:56.999 "fsdev_aio_delete", 00:07:56.999 "fsdev_aio_create", 00:07:56.999 "iscsi_get_histogram", 00:07:56.999 "iscsi_enable_histogram", 00:07:56.999 "iscsi_set_options", 00:07:56.999 "iscsi_get_auth_groups", 00:07:56.999 "iscsi_auth_group_remove_secret", 00:07:56.999 "iscsi_auth_group_add_secret", 00:07:56.999 "iscsi_delete_auth_group", 00:07:56.999 "iscsi_create_auth_group", 00:07:56.999 "iscsi_set_discovery_auth", 00:07:56.999 "iscsi_get_options", 00:07:56.999 "iscsi_target_node_request_logout", 00:07:56.999 "iscsi_target_node_set_redirect", 00:07:56.999 "iscsi_target_node_set_auth", 00:07:56.999 "iscsi_target_node_add_lun", 00:07:56.999 "iscsi_get_stats", 00:07:56.999 "iscsi_get_connections", 00:07:56.999 "iscsi_portal_group_set_auth", 00:07:56.999 "iscsi_start_portal_group", 00:07:56.999 "iscsi_delete_portal_group", 00:07:56.999 "iscsi_create_portal_group", 00:07:56.999 "iscsi_get_portal_groups", 00:07:56.999 "iscsi_delete_target_node", 00:07:56.999 "iscsi_target_node_remove_pg_ig_maps", 00:07:56.999 "iscsi_target_node_add_pg_ig_maps", 00:07:56.999 "iscsi_create_target_node", 00:07:56.999 "iscsi_get_target_nodes", 00:07:56.999 "iscsi_delete_initiator_group", 00:07:56.999 "iscsi_initiator_group_remove_initiators", 00:07:56.999 "iscsi_initiator_group_add_initiators", 00:07:56.999 "iscsi_create_initiator_group", 00:07:56.999 "iscsi_get_initiator_groups", 00:07:56.999 "nvmf_set_crdt", 00:07:56.999 "nvmf_set_config", 00:07:56.999 "nvmf_set_max_subsystems", 00:07:56.999 "nvmf_stop_mdns_prr", 00:07:56.999 "nvmf_publish_mdns_prr", 00:07:56.999 "nvmf_subsystem_get_listeners", 00:07:56.999 "nvmf_subsystem_get_qpairs", 00:07:56.999 "nvmf_subsystem_get_controllers", 00:07:56.999 "nvmf_get_stats", 00:07:56.999 "nvmf_get_transports", 00:07:56.999 "nvmf_create_transport", 00:07:56.999 "nvmf_get_targets", 00:07:56.999 "nvmf_delete_target", 00:07:56.999 "nvmf_create_target", 00:07:56.999 "nvmf_subsystem_allow_any_host", 00:07:56.999 
"nvmf_subsystem_set_keys", 00:07:56.999 "nvmf_subsystem_remove_host", 00:07:56.999 "nvmf_subsystem_add_host", 00:07:56.999 "nvmf_ns_remove_host", 00:07:56.999 "nvmf_ns_add_host", 00:07:56.999 "nvmf_subsystem_remove_ns", 00:07:56.999 "nvmf_subsystem_set_ns_ana_group", 00:07:56.999 "nvmf_subsystem_add_ns", 00:07:56.999 "nvmf_subsystem_listener_set_ana_state", 00:07:56.999 "nvmf_discovery_get_referrals", 00:07:56.999 "nvmf_discovery_remove_referral", 00:07:56.999 "nvmf_discovery_add_referral", 00:07:56.999 "nvmf_subsystem_remove_listener", 00:07:56.999 "nvmf_subsystem_add_listener", 00:07:56.999 "nvmf_delete_subsystem", 00:07:56.999 "nvmf_create_subsystem", 00:07:56.999 "nvmf_get_subsystems", 00:07:56.999 "env_dpdk_get_mem_stats", 00:07:56.999 "nbd_get_disks", 00:07:56.999 "nbd_stop_disk", 00:07:56.999 "nbd_start_disk", 00:07:56.999 "ublk_recover_disk", 00:07:56.999 "ublk_get_disks", 00:07:56.999 "ublk_stop_disk", 00:07:56.999 "ublk_start_disk", 00:07:56.999 "ublk_destroy_target", 00:07:56.999 "ublk_create_target", 00:07:56.999 "virtio_blk_create_transport", 00:07:56.999 "virtio_blk_get_transports", 00:07:56.999 "vhost_controller_set_coalescing", 00:07:56.999 "vhost_get_controllers", 00:07:56.999 "vhost_delete_controller", 00:07:56.999 "vhost_create_blk_controller", 00:07:56.999 "vhost_scsi_controller_remove_target", 00:07:56.999 "vhost_scsi_controller_add_target", 00:07:56.999 "vhost_start_scsi_controller", 00:07:56.999 "vhost_create_scsi_controller", 00:07:56.999 "thread_set_cpumask", 00:07:56.999 "scheduler_set_options", 00:07:56.999 "framework_get_governor", 00:07:56.999 "framework_get_scheduler", 00:07:56.999 "framework_set_scheduler", 00:07:56.999 "framework_get_reactors", 00:07:56.999 "thread_get_io_channels", 00:07:56.999 "thread_get_pollers", 00:07:56.999 "thread_get_stats", 00:07:56.999 "framework_monitor_context_switch", 00:07:56.999 "spdk_kill_instance", 00:07:56.999 "log_enable_timestamps", 00:07:56.999 "log_get_flags", 00:07:56.999 "log_clear_flag", 
00:07:56.999 "log_set_flag", 00:07:56.999 "log_get_level", 00:07:56.999 "log_set_level", 00:07:56.999 "log_get_print_level", 00:07:56.999 "log_set_print_level", 00:07:56.999 "framework_enable_cpumask_locks", 00:07:56.999 "framework_disable_cpumask_locks", 00:07:56.999 "framework_wait_init", 00:07:56.999 "framework_start_init", 00:07:56.999 "scsi_get_devices", 00:07:56.999 "bdev_get_histogram", 00:07:56.999 "bdev_enable_histogram", 00:07:56.999 "bdev_set_qos_limit", 00:07:56.999 "bdev_set_qd_sampling_period", 00:07:56.999 "bdev_get_bdevs", 00:07:56.999 "bdev_reset_iostat", 00:07:56.999 "bdev_get_iostat", 00:07:56.999 "bdev_examine", 00:07:56.999 "bdev_wait_for_examine", 00:07:56.999 "bdev_set_options", 00:07:56.999 "accel_get_stats", 00:07:56.999 "accel_set_options", 00:07:56.999 "accel_set_driver", 00:07:56.999 "accel_crypto_key_destroy", 00:07:56.999 "accel_crypto_keys_get", 00:07:56.999 "accel_crypto_key_create", 00:07:56.999 "accel_assign_opc", 00:07:56.999 "accel_get_module_info", 00:07:56.999 "accel_get_opc_assignments", 00:07:56.999 "vmd_rescan", 00:07:56.999 "vmd_remove_device", 00:07:56.999 "vmd_enable", 00:07:56.999 "sock_get_default_impl", 00:07:56.999 "sock_set_default_impl", 00:07:56.999 "sock_impl_set_options", 00:07:56.999 "sock_impl_get_options", 00:07:56.999 "iobuf_get_stats", 00:07:56.999 "iobuf_set_options", 00:07:56.999 "keyring_get_keys", 00:07:56.999 "framework_get_pci_devices", 00:07:56.999 "framework_get_config", 00:07:56.999 "framework_get_subsystems", 00:07:56.999 "fsdev_set_opts", 00:07:56.999 "fsdev_get_opts", 00:07:56.999 "trace_get_info", 00:07:56.999 "trace_get_tpoint_group_mask", 00:07:56.999 "trace_disable_tpoint_group", 00:07:56.999 "trace_enable_tpoint_group", 00:07:56.999 "trace_clear_tpoint_mask", 00:07:56.999 "trace_set_tpoint_mask", 00:07:56.999 "notify_get_notifications", 00:07:56.999 "notify_get_types", 00:07:56.999 "spdk_get_version", 00:07:56.999 "rpc_get_methods" 00:07:56.999 ] 00:07:56.999 20:20:50 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.999 20:20:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:56.999 20:20:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57999 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57999 ']' 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57999 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57999 00:07:56.999 killing process with pid 57999 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57999' 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57999 00:07:56.999 20:20:50 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57999 00:08:00.290 00:08:00.290 real 0m4.817s 00:08:00.290 user 0m8.769s 00:08:00.290 sys 0m0.634s 00:08:00.290 20:20:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.290 20:20:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.290 ************************************ 00:08:00.290 END TEST spdkcli_tcp 00:08:00.290 ************************************ 00:08:00.290 20:20:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:00.290 20:20:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.290 20:20:53 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.290 20:20:53 -- common/autotest_common.sh@10 -- # set +x 00:08:00.290 ************************************ 00:08:00.290 START TEST dpdk_mem_utility 00:08:00.290 ************************************ 00:08:00.290 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:00.290 * Looking for test storage... 00:08:00.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:00.290 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.290 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.290 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.290 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:00.290 
20:20:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:00.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.290 20:20:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.291 --rc genhtml_branch_coverage=1 00:08:00.291 --rc genhtml_function_coverage=1 00:08:00.291 --rc genhtml_legend=1 00:08:00.291 --rc geninfo_all_blocks=1 00:08:00.291 --rc geninfo_unexecuted_blocks=1 00:08:00.291 00:08:00.291 ' 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.291 --rc genhtml_branch_coverage=1 00:08:00.291 --rc genhtml_function_coverage=1 00:08:00.291 --rc genhtml_legend=1 00:08:00.291 --rc geninfo_all_blocks=1 00:08:00.291 --rc geninfo_unexecuted_blocks=1 00:08:00.291 00:08:00.291 ' 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.291 --rc genhtml_branch_coverage=1 00:08:00.291 --rc genhtml_function_coverage=1 00:08:00.291 --rc genhtml_legend=1 00:08:00.291 --rc geninfo_all_blocks=1 00:08:00.291 --rc geninfo_unexecuted_blocks=1 00:08:00.291 00:08:00.291 ' 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.291 --rc genhtml_branch_coverage=1 00:08:00.291 --rc genhtml_function_coverage=1 00:08:00.291 --rc genhtml_legend=1 
00:08:00.291 --rc geninfo_all_blocks=1 00:08:00.291 --rc geninfo_unexecuted_blocks=1 00:08:00.291 00:08:00.291 ' 00:08:00.291 20:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:00.291 20:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58126 00:08:00.291 20:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58126 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58126 ']' 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.291 20:20:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.291 20:20:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:00.291 [2024-11-26 20:20:53.692954] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:08:00.291 [2024-11-26 20:20:53.693175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58126 ] 00:08:00.550 [2024-11-26 20:20:53.869207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.550 [2024-11-26 20:20:53.996725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.489 20:20:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.489 20:20:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:01.489 20:20:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:01.489 20:20:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:01.489 20:20:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.489 20:20:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:01.489 { 00:08:01.489 "filename": "/tmp/spdk_mem_dump.txt" 00:08:01.489 } 00:08:01.489 20:20:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.489 20:20:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:01.749 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:01.749 1 heaps totaling size 824.000000 MiB 00:08:01.749 size: 824.000000 MiB heap id: 0 00:08:01.749 end heaps---------- 00:08:01.749 9 mempools totaling size 603.782043 MiB 00:08:01.749 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:01.749 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:01.749 size: 100.555481 MiB name: bdev_io_58126 00:08:01.749 size: 50.003479 MiB name: msgpool_58126 00:08:01.749 size: 36.509338 MiB name: fsdev_io_58126 00:08:01.749 size: 
21.763794 MiB name: PDU_Pool 00:08:01.749 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:01.749 size: 4.133484 MiB name: evtpool_58126 00:08:01.749 size: 0.026123 MiB name: Session_Pool 00:08:01.749 end mempools------- 00:08:01.749 6 memzones totaling size 4.142822 MiB 00:08:01.749 size: 1.000366 MiB name: RG_ring_0_58126 00:08:01.749 size: 1.000366 MiB name: RG_ring_1_58126 00:08:01.749 size: 1.000366 MiB name: RG_ring_4_58126 00:08:01.749 size: 1.000366 MiB name: RG_ring_5_58126 00:08:01.749 size: 0.125366 MiB name: RG_ring_2_58126 00:08:01.749 size: 0.015991 MiB name: RG_ring_3_58126 00:08:01.749 end memzones------- 00:08:01.749 20:20:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:01.749 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:08:01.749 list of free elements. size: 16.781372 MiB 00:08:01.749 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:01.749 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:01.749 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:01.749 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:01.749 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:01.749 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:01.749 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:01.749 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:01.749 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:01.749 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:01.749 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:01.749 element at address: 0x20001b400000 with size: 0.562927 MiB 00:08:01.749 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:01.749 element at address: 0x200019600000 with size: 0.487976 MiB 00:08:01.749 element at address: 0x200019e00000 
with size: 0.485413 MiB 00:08:01.749 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:01.749 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:01.749 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:01.749 list of standard malloc elements. size: 199.287720 MiB 00:08:01.749 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:01.749 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:01.749 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:01.749 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:01.749 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:01.749 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:01.749 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:01.749 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:01.749 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:01.749 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:01.749 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:01.749 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:01.749 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:01.750 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:01.750 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e0c0 with size: 0.000244 
MiB 00:08:01.750 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff180 
with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:01.750 element at 
address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b490dc0 with size: 0.000244 MiB 
00:08:01.750 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:01.750 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4929c0 with 
size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:01.751 element at address: 
0x20001b4945c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:01.751 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:01.751 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:01.751 
element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d080 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d480 with size: 0.000244 
MiB 00:08:01.751 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f080 
with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:01.751 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:01.751 list of memzone associated elements. 
size: 607.930908 MiB 00:08:01.751 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:01.751 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:01.751 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:01.751 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:01.751 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:01.751 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58126_0 00:08:01.751 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:01.751 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58126_0 00:08:01.751 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:01.751 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58126_0 00:08:01.751 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:01.751 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:01.751 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:01.751 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:01.751 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:01.752 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58126_0 00:08:01.752 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:01.752 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58126 00:08:01.752 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:01.752 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58126 00:08:01.752 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:01.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:01.752 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:01.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:01.752 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:01.752 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:01.752 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:01.752 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:01.752 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:01.752 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58126 00:08:01.752 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:01.752 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58126 00:08:01.752 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:01.752 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58126 00:08:01.752 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:01.752 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58126 00:08:01.752 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:01.752 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58126 00:08:01.752 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:01.752 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58126 00:08:01.752 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:01.752 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:01.752 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:01.752 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:01.752 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:01.752 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:01.752 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:01.752 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58126 00:08:01.752 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:01.752 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58126 00:08:01.752 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:01.752 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:01.752 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:01.752 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:01.752 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:01.752 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58126 00:08:01.752 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:01.752 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:01.752 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:01.752 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58126 00:08:01.752 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:01.752 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58126 00:08:01.752 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:01.752 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58126 00:08:01.752 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:01.752 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:01.752 20:20:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:01.752 20:20:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58126 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58126 ']' 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58126 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58126 00:08:01.752 killing process with pid 58126 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58126' 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58126 00:08:01.752 20:20:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58126 00:08:04.288 00:08:04.288 real 0m4.425s 00:08:04.288 user 0m4.372s 00:08:04.288 sys 0m0.609s 00:08:04.288 ************************************ 00:08:04.288 END TEST dpdk_mem_utility 00:08:04.288 ************************************ 00:08:04.288 20:20:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.288 20:20:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:04.288 20:20:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:04.288 20:20:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.288 20:20:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.288 20:20:57 -- common/autotest_common.sh@10 -- # set +x 00:08:04.288 ************************************ 00:08:04.288 START TEST event 00:08:04.288 ************************************ 00:08:04.288 20:20:57 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:04.546 * Looking for test storage... 
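The `killprocess` sequence traced above (from `common/autotest_common.sh`) first probes the PID with `kill -0`, resolves the command name via `ps --no-headers -o comm=`, and refuses to signal a process whose name is `sudo` before killing and waiting on it. A minimal sketch of that guard, with a hypothetical function name and simplified error handling (not the SPDK helper itself), might look like:

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the killprocess pattern seen in the
# xtrace above: verify the PID is alive with `kill -0`, look up its
# command name, never signal `sudo` directly, then kill and reap it.
killprocess_sketch() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
    name=$(ps --no-headers -o comm= -p "$pid")
    if [ "$name" = "sudo" ]; then
        return 1                                 # killing sudo would orphan the worker
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
    return 0
}
```

In the log the target is `reactor_0` (the SPDK app's main reactor thread name), so the `sudo` guard does not trigger and the plain `kill`/`wait` path runs.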
00:08:04.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:04.546 20:20:57 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:04.546 20:20:57 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:04.546 20:20:57 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:04.546 20:20:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.546 20:20:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.546 20:20:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.546 20:20:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.546 20:20:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.546 20:20:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.546 20:20:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.546 20:20:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.546 20:20:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.546 20:20:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.546 20:20:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.546 20:20:58 event -- scripts/common.sh@344 -- # case "$op" in 00:08:04.546 20:20:58 event -- scripts/common.sh@345 -- # : 1 00:08:04.546 20:20:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.546 20:20:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.546 20:20:58 event -- scripts/common.sh@365 -- # decimal 1 00:08:04.546 20:20:58 event -- scripts/common.sh@353 -- # local d=1 00:08:04.546 20:20:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.546 20:20:58 event -- scripts/common.sh@355 -- # echo 1 00:08:04.546 20:20:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.546 20:20:58 event -- scripts/common.sh@366 -- # decimal 2 00:08:04.546 20:20:58 event -- scripts/common.sh@353 -- # local d=2 00:08:04.546 20:20:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.546 20:20:58 event -- scripts/common.sh@355 -- # echo 2 00:08:04.546 20:20:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.546 20:20:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.546 20:20:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.546 20:20:58 event -- scripts/common.sh@368 -- # return 0 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.546 --rc genhtml_branch_coverage=1 00:08:04.546 --rc genhtml_function_coverage=1 00:08:04.546 --rc genhtml_legend=1 00:08:04.546 --rc geninfo_all_blocks=1 00:08:04.546 --rc geninfo_unexecuted_blocks=1 00:08:04.546 00:08:04.546 ' 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.546 --rc genhtml_branch_coverage=1 00:08:04.546 --rc genhtml_function_coverage=1 00:08:04.546 --rc genhtml_legend=1 00:08:04.546 --rc geninfo_all_blocks=1 00:08:04.546 --rc geninfo_unexecuted_blocks=1 00:08:04.546 00:08:04.546 ' 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:04.546 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:04.546 --rc genhtml_branch_coverage=1 00:08:04.546 --rc genhtml_function_coverage=1 00:08:04.546 --rc genhtml_legend=1 00:08:04.546 --rc geninfo_all_blocks=1 00:08:04.546 --rc geninfo_unexecuted_blocks=1 00:08:04.546 00:08:04.546 ' 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.546 --rc genhtml_branch_coverage=1 00:08:04.546 --rc genhtml_function_coverage=1 00:08:04.546 --rc genhtml_legend=1 00:08:04.546 --rc geninfo_all_blocks=1 00:08:04.546 --rc geninfo_unexecuted_blocks=1 00:08:04.546 00:08:04.546 ' 00:08:04.546 20:20:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:04.546 20:20:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:04.546 20:20:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:04.546 20:20:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.546 20:20:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:04.546 ************************************ 00:08:04.546 START TEST event_perf 00:08:04.546 ************************************ 00:08:04.546 20:20:58 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:04.810 Running I/O for 1 seconds...[2024-11-26 20:20:58.115161] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:08:04.810 [2024-11-26 20:20:58.115303] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58240 ] 00:08:04.810 [2024-11-26 20:20:58.297696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.076 [2024-11-26 20:20:58.439560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.076 [2024-11-26 20:20:58.439670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.076 [2024-11-26 20:20:58.439813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.076 [2024-11-26 20:20:58.439848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.451 Running I/O for 1 seconds... 00:08:06.451 lcore 0: 182458 00:08:06.451 lcore 1: 182456 00:08:06.451 lcore 2: 182458 00:08:06.451 lcore 3: 182457 00:08:06.451 done. 
00:08:06.451 00:08:06.451 real 0m1.638s 00:08:06.451 user 0m4.389s 00:08:06.451 sys 0m0.122s 00:08:06.451 20:20:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.451 20:20:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.451 ************************************ 00:08:06.451 END TEST event_perf 00:08:06.451 ************************************ 00:08:06.451 20:20:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:06.451 20:20:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:06.451 20:20:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.451 20:20:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:06.451 ************************************ 00:08:06.451 START TEST event_reactor 00:08:06.451 ************************************ 00:08:06.451 20:20:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:06.451 [2024-11-26 20:20:59.815750] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:08:06.451 [2024-11-26 20:20:59.815959] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58279 ] 00:08:06.451 [2024-11-26 20:20:59.991801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.710 [2024-11-26 20:21:00.115184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.086 test_start 00:08:08.086 oneshot 00:08:08.086 tick 100 00:08:08.086 tick 100 00:08:08.086 tick 250 00:08:08.086 tick 100 00:08:08.086 tick 100 00:08:08.086 tick 100 00:08:08.086 tick 250 00:08:08.086 tick 500 00:08:08.086 tick 100 00:08:08.086 tick 100 00:08:08.086 tick 250 00:08:08.086 tick 100 00:08:08.086 tick 100 00:08:08.086 test_end 00:08:08.086 00:08:08.086 real 0m1.599s 00:08:08.086 user 0m1.395s 00:08:08.086 sys 0m0.095s 00:08:08.086 20:21:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.086 20:21:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 ************************************ 00:08:08.086 END TEST event_reactor 00:08:08.086 ************************************ 00:08:08.086 20:21:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:08.086 20:21:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:08.086 20:21:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.086 20:21:01 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 ************************************ 00:08:08.086 START TEST event_reactor_perf 00:08:08.086 ************************************ 00:08:08.086 20:21:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:08.086 [2024-11-26 
20:21:01.466744] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:08:08.086 [2024-11-26 20:21:01.466967] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:08:08.345 [2024-11-26 20:21:01.651836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.345 [2024-11-26 20:21:01.766696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.799 test_start 00:08:09.799 test_end 00:08:09.799 Performance: 372915 events per second 00:08:09.799 00:08:09.799 real 0m1.577s 00:08:09.799 user 0m1.375s 00:08:09.799 sys 0m0.095s 00:08:09.799 20:21:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.799 20:21:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:09.799 ************************************ 00:08:09.799 END TEST event_reactor_perf 00:08:09.799 ************************************ 00:08:09.799 20:21:03 event -- event/event.sh@49 -- # uname -s 00:08:09.799 20:21:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:09.799 20:21:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:09.799 20:21:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.799 20:21:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.799 20:21:03 event -- common/autotest_common.sh@10 -- # set +x 00:08:09.799 ************************************ 00:08:09.799 START TEST event_scheduler 00:08:09.799 ************************************ 00:08:09.799 20:21:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:09.799 * Looking for test storage... 
00:08:09.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:09.799 20:21:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.799 20:21:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.799 20:21:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.799 20:21:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.799 20:21:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.800 20:21:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.800 --rc genhtml_branch_coverage=1 00:08:09.800 --rc genhtml_function_coverage=1 00:08:09.800 --rc genhtml_legend=1 00:08:09.800 --rc geninfo_all_blocks=1 00:08:09.800 --rc geninfo_unexecuted_blocks=1 00:08:09.800 00:08:09.800 ' 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.800 --rc genhtml_branch_coverage=1 00:08:09.800 --rc genhtml_function_coverage=1 00:08:09.800 --rc 
genhtml_legend=1 00:08:09.800 --rc geninfo_all_blocks=1 00:08:09.800 --rc geninfo_unexecuted_blocks=1 00:08:09.800 00:08:09.800 ' 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.800 --rc genhtml_branch_coverage=1 00:08:09.800 --rc genhtml_function_coverage=1 00:08:09.800 --rc genhtml_legend=1 00:08:09.800 --rc geninfo_all_blocks=1 00:08:09.800 --rc geninfo_unexecuted_blocks=1 00:08:09.800 00:08:09.800 ' 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.800 --rc genhtml_branch_coverage=1 00:08:09.800 --rc genhtml_function_coverage=1 00:08:09.800 --rc genhtml_legend=1 00:08:09.800 --rc geninfo_all_blocks=1 00:08:09.800 --rc geninfo_unexecuted_blocks=1 00:08:09.800 00:08:09.800 ' 00:08:09.800 20:21:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:09.800 20:21:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58392 00:08:09.800 20:21:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:09.800 20:21:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:09.800 20:21:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58392 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58392 ']' 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:09.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.800 20:21:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:10.058 [2024-11-26 20:21:03.367782] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:08:10.059 [2024-11-26 20:21:03.367995] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58392 ] 00:08:10.059 [2024-11-26 20:21:03.545443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.318 [2024-11-26 20:21:03.667132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.318 [2024-11-26 20:21:03.667304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.318 [2024-11-26 20:21:03.667402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.318 [2024-11-26 20:21:03.667437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.925 20:21:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.925 20:21:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:10.925 20:21:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:10.925 20:21:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.925 20:21:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:10.925 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:10.925 POWER: Cannot set governor of lcore 0 to userspace 00:08:10.925 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:10.925 POWER: Cannot set governor of lcore 0 to performance 00:08:10.925 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:10.925 POWER: Cannot set governor of lcore 0 to userspace 00:08:10.925 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:10.925 POWER: Cannot set governor of lcore 0 to userspace 00:08:10.925 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:10.925 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:10.925 POWER: Unable to set Power Management Environment for lcore 0 00:08:10.925 [2024-11-26 20:21:04.260163] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:10.925 [2024-11-26 20:21:04.260190] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:10.925 [2024-11-26 20:21:04.260202] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:10.925 [2024-11-26 20:21:04.260225] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:10.925 [2024-11-26 20:21:04.260234] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:10.925 [2024-11-26 20:21:04.260245] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:10.925 20:21:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.925 20:21:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:10.925 20:21:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.925 20:21:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 [2024-11-26 20:21:04.615249] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:11.184 20:21:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:11.184 20:21:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.184 20:21:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 ************************************ 00:08:11.184 START TEST scheduler_create_thread 00:08:11.184 ************************************ 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 2 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 3 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 4 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 5 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 6 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.184 7 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 8 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 9 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 10 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.184 20:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.563 20:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.563 20:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:12.563 20:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:12.563 20:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.563 20:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.501 20:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.501 20:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:13.501 20:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.501 20:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.436 20:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.436 20:21:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:14.436 20:21:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:14.436 20:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.436 20:21:07 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.005 20:21:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.005 00:08:15.005 real 0m3.878s 00:08:15.005 user 0m0.029s 00:08:15.005 ************************************ 00:08:15.005 END TEST scheduler_create_thread 00:08:15.005 ************************************ 00:08:15.005 sys 0m0.008s 00:08:15.005 20:21:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.005 20:21:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.266 20:21:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:15.266 20:21:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58392 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58392 ']' 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58392 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58392 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58392' 00:08:15.266 killing process with pid 58392 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58392 00:08:15.266 20:21:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58392 00:08:15.527 [2024-11-26 20:21:08.882039] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:16.904 00:08:16.904 real 0m7.137s 00:08:16.904 user 0m14.956s 00:08:16.904 sys 0m0.502s 00:08:16.904 20:21:10 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.904 ************************************ 00:08:16.904 END TEST event_scheduler 00:08:16.904 ************************************ 00:08:16.904 20:21:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:16.904 20:21:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:16.904 20:21:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:16.904 20:21:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.904 20:21:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.904 20:21:10 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.904 ************************************ 00:08:16.904 START TEST app_repeat 00:08:16.904 ************************************ 00:08:16.904 20:21:10 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58520 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:16.904 
20:21:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58520' 00:08:16.904 Process app_repeat pid: 58520 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:16.904 spdk_app_start Round 0 00:08:16.904 20:21:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58520 /var/tmp/spdk-nbd.sock 00:08:16.904 20:21:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58520 ']' 00:08:16.904 20:21:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:16.904 20:21:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.904 20:21:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:16.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:16.904 20:21:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.904 20:21:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:16.905 [2024-11-26 20:21:10.348655] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:08:16.905 [2024-11-26 20:21:10.348875] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58520 ] 00:08:17.164 [2024-11-26 20:21:10.510613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.164 [2024-11-26 20:21:10.643264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.164 [2024-11-26 20:21:10.643314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.099 20:21:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.099 20:21:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:18.099 20:21:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:18.099 Malloc0 00:08:18.099 20:21:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:18.701 Malloc1 00:08:18.701 20:21:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:18.701 20:21:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.701 20:21:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:18.701 20:21:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:18.701 20:21:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.701 20:21:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:18.701 20:21:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:18.702 20:21:11 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.702 20:21:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:18.702 /dev/nbd0 00:08:18.702 20:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:18.702 20:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.702 1+0 records in 00:08:18.702 1+0 
records out 00:08:18.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436926 s, 9.4 MB/s 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.702 20:21:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:18.702 20:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.702 20:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.702 20:21:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:18.959 /dev/nbd1 00:08:19.217 20:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:19.217 20:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:19.217 1+0 records in 00:08:19.217 1+0 records out 00:08:19.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340282 s, 12.0 MB/s 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:19.217 20:21:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:19.217 20:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:19.217 20:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:19.217 20:21:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.217 20:21:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.218 20:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:19.477 { 00:08:19.477 "nbd_device": "/dev/nbd0", 00:08:19.477 "bdev_name": "Malloc0" 00:08:19.477 }, 00:08:19.477 { 00:08:19.477 "nbd_device": "/dev/nbd1", 00:08:19.477 "bdev_name": "Malloc1" 00:08:19.477 } 00:08:19.477 ]' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:19.477 { 00:08:19.477 "nbd_device": "/dev/nbd0", 00:08:19.477 "bdev_name": "Malloc0" 00:08:19.477 }, 00:08:19.477 { 00:08:19.477 "nbd_device": "/dev/nbd1", 00:08:19.477 "bdev_name": "Malloc1" 00:08:19.477 } 00:08:19.477 ]' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:19.477 /dev/nbd1' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:19.477 /dev/nbd1' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:19.477 256+0 records in 00:08:19.477 256+0 records out 00:08:19.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685462 s, 153 MB/s 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:19.477 256+0 records in 00:08:19.477 256+0 records out 00:08:19.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213943 s, 49.0 MB/s 00:08:19.477 20:21:12 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:19.477 256+0 records in 00:08:19.477 256+0 records out 00:08:19.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286257 s, 36.6 MB/s 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.477 20:21:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.737 20:21:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.996 20:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:20.255 20:21:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:20.255 20:21:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:20.823 20:21:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:22.204 [2024-11-26 20:21:15.451536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.204 [2024-11-26 20:21:15.575262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.204 [2024-11-26 20:21:15.575305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.463 
[2024-11-26 20:21:15.787250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:22.463 [2024-11-26 20:21:15.787336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:23.905 20:21:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:23.905 spdk_app_start Round 1 00:08:23.905 20:21:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:23.905 20:21:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58520 /var/tmp/spdk-nbd.sock 00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58520 ']' 00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.905 20:21:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:23.905 20:21:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.474 Malloc0 00:08:24.474 20:21:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.735 Malloc1 00:08:24.735 20:21:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.735 20:21:18 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.735 20:21:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:24.994 /dev/nbd0 00:08:24.994 20:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.994 20:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.994 1+0 records in 00:08:24.994 1+0 records out 00:08:24.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023948 s, 17.1 MB/s 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.994 20:21:18 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.994 20:21:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:24.994 20:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.994 20:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.994 20:21:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:25.253 /dev/nbd1 00:08:25.253 20:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.253 20:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.253 1+0 records in 00:08:25.253 1+0 records out 00:08:25.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275056 s, 14.9 MB/s 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:25.253 20:21:18 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.253 20:21:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:25.253 20:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.253 20:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.253 20:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.253 20:21:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.253 20:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.512 20:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.512 { 00:08:25.512 "nbd_device": "/dev/nbd0", 00:08:25.512 "bdev_name": "Malloc0" 00:08:25.512 }, 00:08:25.512 { 00:08:25.512 "nbd_device": "/dev/nbd1", 00:08:25.512 "bdev_name": "Malloc1" 00:08:25.512 } 00:08:25.512 ]' 00:08:25.512 20:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.512 20:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.512 { 00:08:25.512 "nbd_device": "/dev/nbd0", 00:08:25.512 "bdev_name": "Malloc0" 00:08:25.512 }, 00:08:25.512 { 00:08:25.512 "nbd_device": "/dev/nbd1", 00:08:25.512 "bdev_name": "Malloc1" 00:08:25.512 } 00:08:25.512 ]' 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.512 /dev/nbd1' 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.512 /dev/nbd1' 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.512 
20:21:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.512 256+0 records in 00:08:25.512 256+0 records out 00:08:25.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613913 s, 171 MB/s 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.512 256+0 records in 00:08:25.512 256+0 records out 00:08:25.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210365 s, 49.8 MB/s 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.512 20:21:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.775 256+0 records in 00:08:25.775 256+0 records out 00:08:25.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314245 s, 33.4 MB/s 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.775 20:21:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.036 20:21:19 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.036 20:21:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.294 20:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.552 20:21:19 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.552 20:21:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.552 20:21:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.118 20:21:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:28.497 [2024-11-26 20:21:21.759122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.497 [2024-11-26 20:21:21.889154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.497 [2024-11-26 20:21:21.889178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.766 [2024-11-26 20:21:22.108088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:28.766 [2024-11-26 20:21:22.108211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:30.139 spdk_app_start Round 2 00:08:30.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:30.139 20:21:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:30.139 20:21:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:30.139 20:21:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58520 /var/tmp/spdk-nbd.sock 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58520 ']' 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.139 20:21:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:30.139 20:21:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.395 Malloc0 00:08:30.653 20:21:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.935 Malloc1 00:08:30.935 20:21:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.935 20:21:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.936 20:21:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:30.936 20:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.936 20:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.936 20:21:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:31.194 /dev/nbd0 00:08:31.194 20:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.194 20:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.194 1+0 records in 00:08:31.194 1+0 records out 00:08:31.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255471 s, 16.0 MB/s 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:31.194 20:21:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:31.194 20:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.194 20:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.194 20:21:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:31.452 /dev/nbd1 00:08:31.452 20:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:31.452 20:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:31.452 20:21:24 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.452 1+0 records in 00:08:31.452 1+0 records out 00:08:31.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350462 s, 11.7 MB/s 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:31.452 20:21:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:31.452 20:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.452 20:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.452 20:21:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.452 20:21:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.452 20:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.710 { 00:08:31.710 "nbd_device": "/dev/nbd0", 00:08:31.710 "bdev_name": "Malloc0" 00:08:31.710 }, 00:08:31.710 { 00:08:31.710 "nbd_device": "/dev/nbd1", 00:08:31.710 "bdev_name": "Malloc1" 00:08:31.710 } 00:08:31.710 ]' 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.710 { 00:08:31.710 "nbd_device": "/dev/nbd0", 00:08:31.710 "bdev_name": "Malloc0" 00:08:31.710 }, 00:08:31.710 { 00:08:31.710 "nbd_device": "/dev/nbd1", 00:08:31.710 "bdev_name": "Malloc1" 00:08:31.710 } 00:08:31.710 ]' 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.710 /dev/nbd1' 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.710 /dev/nbd1' 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:31.710 256+0 records in 00:08:31.710 256+0 records out 00:08:31.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129654 s, 80.9 MB/s 00:08:31.710 20:21:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.710 20:21:25 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.966 256+0 records in 00:08:31.966 256+0 records out 00:08:31.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193256 s, 54.3 MB/s 00:08:31.966 20:21:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.966 20:21:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.966 256+0 records in 00:08:31.966 256+0 records out 00:08:31.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268895 s, 39.0 MB/s 00:08:31.966 20:21:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:31.966 20:21:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.967 20:21:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.224 20:21:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.483 20:21:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:32.741 20:21:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:32.741 20:21:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:33.307 20:21:26 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:08:34.683 [2024-11-26 20:21:28.092374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.683 [2024-11-26 20:21:28.228306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.683 [2024-11-26 20:21:28.228310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.942 [2024-11-26 20:21:28.474470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:34.942 [2024-11-26 20:21:28.474589] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:36.319 20:21:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58520 /var/tmp/spdk-nbd.sock 00:08:36.319 20:21:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58520 ']' 00:08:36.319 20:21:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.319 20:21:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.319 20:21:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:36.319 20:21:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.319 20:21:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:36.578 20:21:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.578 20:21:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:36.578 20:21:29 event.app_repeat -- event/event.sh@39 -- # killprocess 58520 00:08:36.578 20:21:29 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58520 ']' 00:08:36.578 20:21:29 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58520 00:08:36.578 20:21:29 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:36.578 20:21:29 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.578 20:21:29 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58520 00:08:36.578 killing process with pid 58520 00:08:36.578 20:21:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.578 20:21:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.578 20:21:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58520' 00:08:36.578 20:21:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58520 00:08:36.578 20:21:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58520 00:08:37.951 spdk_app_start is called in Round 0. 00:08:37.951 Shutdown signal received, stop current app iteration 00:08:37.951 Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 reinitialization... 00:08:37.951 spdk_app_start is called in Round 1. 00:08:37.951 Shutdown signal received, stop current app iteration 00:08:37.951 Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 reinitialization... 00:08:37.951 spdk_app_start is called in Round 2. 
00:08:37.951 Shutdown signal received, stop current app iteration 00:08:37.951 Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 reinitialization... 00:08:37.951 spdk_app_start is called in Round 3. 00:08:37.951 Shutdown signal received, stop current app iteration 00:08:37.951 ************************************ 00:08:37.951 END TEST app_repeat 00:08:37.951 ************************************ 00:08:37.951 20:21:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:37.951 20:21:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:37.951 00:08:37.951 real 0m21.014s 00:08:37.951 user 0m45.665s 00:08:37.951 sys 0m2.926s 00:08:37.951 20:21:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.951 20:21:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:37.951 20:21:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:37.951 20:21:31 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:37.951 20:21:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.951 20:21:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.951 20:21:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:37.951 ************************************ 00:08:37.951 START TEST cpu_locks 00:08:37.951 ************************************ 00:08:37.951 20:21:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:37.951 * Looking for test storage... 
00:08:37.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:37.951 20:21:31 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.951 20:21:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.951 20:21:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.211 20:21:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.211 20:21:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.212 20:21:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.212 20:21:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.212 --rc genhtml_branch_coverage=1 00:08:38.212 --rc genhtml_function_coverage=1 00:08:38.212 --rc genhtml_legend=1 00:08:38.212 --rc geninfo_all_blocks=1 00:08:38.212 --rc geninfo_unexecuted_blocks=1 00:08:38.212 00:08:38.212 ' 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.212 --rc genhtml_branch_coverage=1 00:08:38.212 --rc genhtml_function_coverage=1 00:08:38.212 --rc genhtml_legend=1 00:08:38.212 --rc geninfo_all_blocks=1 00:08:38.212 --rc geninfo_unexecuted_blocks=1 
00:08:38.212 00:08:38.212 ' 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.212 --rc genhtml_branch_coverage=1 00:08:38.212 --rc genhtml_function_coverage=1 00:08:38.212 --rc genhtml_legend=1 00:08:38.212 --rc geninfo_all_blocks=1 00:08:38.212 --rc geninfo_unexecuted_blocks=1 00:08:38.212 00:08:38.212 ' 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.212 --rc genhtml_branch_coverage=1 00:08:38.212 --rc genhtml_function_coverage=1 00:08:38.212 --rc genhtml_legend=1 00:08:38.212 --rc geninfo_all_blocks=1 00:08:38.212 --rc geninfo_unexecuted_blocks=1 00:08:38.212 00:08:38.212 ' 00:08:38.212 20:21:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:38.212 20:21:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:38.212 20:21:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:38.212 20:21:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.212 20:21:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.212 ************************************ 00:08:38.212 START TEST default_locks 00:08:38.212 ************************************ 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58980 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:38.212 
20:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58980 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58980 ']' 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.212 20:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.212 [2024-11-26 20:21:31.721492] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:08:38.212 [2024-11-26 20:21:31.721635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:08:38.471 [2024-11-26 20:21:31.900486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.729 [2024-11-26 20:21:32.036295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.666 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.666 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:39.666 20:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58980 00:08:39.666 20:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:39.666 20:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58980 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58980 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58980 ']' 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58980 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58980 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.924 killing process with pid 58980 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58980' 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58980 00:08:39.924 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58980 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58980 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58980 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58980 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58980 ']' 00:08:43.208 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:43.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58980) - No such process 00:08:43.209 ERROR: process (pid: 58980) is no longer running 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:43.209 00:08:43.209 real 0m4.759s 00:08:43.209 user 0m4.815s 00:08:43.209 sys 0m0.633s 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.209 20:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:43.209 ************************************ 00:08:43.209 END TEST default_locks 00:08:43.209 ************************************ 00:08:43.209 20:21:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:43.209 20:21:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:08:43.209 20:21:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.209 20:21:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:43.209 ************************************ 00:08:43.209 START TEST default_locks_via_rpc 00:08:43.209 ************************************ 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59061 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59061 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59061 ']' 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.209 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.209 [2024-11-26 20:21:36.514188] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:08:43.209 [2024-11-26 20:21:36.514358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59061 ] 00:08:43.209 [2024-11-26 20:21:36.692937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.467 [2024-11-26 20:21:36.831221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.404 20:21:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59061 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:44.404 20:21:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59061 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59061 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59061 ']' 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59061 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59061 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.972 killing process with pid 59061 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59061' 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59061 00:08:44.972 20:21:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59061 00:08:48.333 00:08:48.333 real 0m4.876s 00:08:48.333 user 0m4.916s 00:08:48.333 sys 0m0.706s 00:08:48.333 20:21:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.333 20:21:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.333 ************************************ 00:08:48.333 END TEST default_locks_via_rpc 00:08:48.333 ************************************ 00:08:48.333 20:21:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:48.333 20:21:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.333 20:21:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.333 20:21:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:48.333 ************************************ 00:08:48.333 START TEST non_locking_app_on_locked_coremask 00:08:48.333 ************************************ 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59146 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59146 /var/tmp/spdk.sock 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59146 ']' 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:48.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.333 20:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:48.333 [2024-11-26 20:21:41.434485] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:08:48.333 [2024-11-26 20:21:41.434630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:08:48.334 [2024-11-26 20:21:41.598277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.334 [2024-11-26 20:21:41.739731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59162 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59162 /var/tmp/spdk2.sock 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59162 ']' 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:49.299 20:21:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.299 20:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.558 [2024-11-26 20:21:42.863829] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:08:49.558 [2024-11-26 20:21:42.863961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59162 ] 00:08:49.558 [2024-11-26 20:21:43.052490] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:49.558 [2024-11-26 20:21:43.052577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.817 [2024-11-26 20:21:43.335847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.355 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.355 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:52.355 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59146 00:08:52.355 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59146 00:08:52.355 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:52.615 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59146 00:08:52.615 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59146 ']' 00:08:52.615 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59146 00:08:52.615 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:52.615 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.615 20:21:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59146 00:08:52.615 20:21:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.615 20:21:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.615 20:21:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59146' 00:08:52.615 killing process with pid 59146 00:08:52.615 20:21:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59146 00:08:52.615 20:21:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59146 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59162 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59162 ']' 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59162 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59162 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59162' 00:08:59.188 killing process with pid 59162 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59162 00:08:59.188 20:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59162 00:09:01.094 00:09:01.094 real 0m12.864s 00:09:01.094 user 0m13.356s 00:09:01.094 sys 0m1.259s 00:09:01.094 20:21:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:09:01.094 20:21:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.094 ************************************ 00:09:01.094 END TEST non_locking_app_on_locked_coremask 00:09:01.094 ************************************ 00:09:01.094 20:21:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:01.094 20:21:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.094 20:21:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.094 20:21:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.094 ************************************ 00:09:01.094 START TEST locking_app_on_unlocked_coremask 00:09:01.094 ************************************ 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59332 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59332 /var/tmp/spdk.sock 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59332 ']' 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.094 20:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.094 [2024-11-26 20:21:54.365646] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:09:01.094 [2024-11-26 20:21:54.365785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59332 ] 00:09:01.094 [2024-11-26 20:21:54.544265] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:01.094 [2024-11-26 20:21:54.544347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.354 [2024-11-26 20:21:54.671013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59352 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59352 /var/tmp/spdk2.sock 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59352 ']' 
00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.291 20:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.291 [2024-11-26 20:21:55.750891] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:09:02.291 [2024-11-26 20:21:55.751020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59352 ] 00:09:02.550 [2024-11-26 20:21:55.931096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.809 [2024-11-26 20:21:56.194877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.347 20:21:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.347 20:21:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:05.347 20:21:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59352 00:09:05.347 20:21:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:05.347 20:21:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59352 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59332 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59332 ']' 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59332 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59332 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.606 killing process with pid 59332 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59332' 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59332 00:09:05.606 20:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59332 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59352 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59352 ']' 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59352 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59352 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.196 killing process with pid 59352 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59352' 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59352 00:09:12.196 20:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59352 00:09:13.571 00:09:13.571 real 0m12.866s 00:09:13.571 user 0m13.225s 00:09:13.571 sys 0m1.345s 00:09:13.571 20:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.839 ************************************ 00:09:13.839 END TEST locking_app_on_unlocked_coremask 00:09:13.839 ************************************ 00:09:13.839 20:22:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:13.839 20:22:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.839 20:22:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.839 20:22:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:13.839 ************************************ 00:09:13.839 START TEST 
locking_app_on_locked_coremask 00:09:13.839 ************************************ 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59507 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59507 /var/tmp/spdk.sock 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59507 ']' 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.839 20:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.839 [2024-11-26 20:22:07.293391] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:09:13.839 [2024-11-26 20:22:07.293533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59507 ] 00:09:14.096 [2024-11-26 20:22:07.469735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.096 [2024-11-26 20:22:07.590085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59527 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59527 /var/tmp/spdk2.sock 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59527 /var/tmp/spdk2.sock 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59527 /var/tmp/spdk2.sock 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59527 ']' 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.028 20:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:15.028 [2024-11-26 20:22:08.577313] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:09:15.028 [2024-11-26 20:22:08.577433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59527 ] 00:09:15.286 [2024-11-26 20:22:08.750205] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59507 has claimed it. 00:09:15.286 [2024-11-26 20:22:08.754347] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:15.850 ERROR: process (pid: 59527) is no longer running 00:09:15.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59527) - No such process 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59507 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59507 00:09:15.850 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59507 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59507 ']' 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59507 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59507 00:09:16.107 
20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.107 killing process with pid 59507 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59507' 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59507 00:09:16.107 20:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59507 00:09:19.430 00:09:19.430 real 0m5.117s 00:09:19.430 user 0m5.318s 00:09:19.430 sys 0m0.710s 00:09:19.430 20:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.430 20:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.430 ************************************ 00:09:19.430 END TEST locking_app_on_locked_coremask 00:09:19.430 ************************************ 00:09:19.430 20:22:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:19.430 20:22:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.430 20:22:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.430 20:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.430 ************************************ 00:09:19.430 START TEST locking_overlapped_coremask 00:09:19.430 ************************************ 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59598 00:09:19.430 20:22:12 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59598 /var/tmp/spdk.sock 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59598 ']' 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.430 20:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.430 [2024-11-26 20:22:12.474209] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:09:19.430 [2024-11-26 20:22:12.474352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:09:19.431 [2024-11-26 20:22:12.654211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.431 [2024-11-26 20:22:12.788823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.431 [2024-11-26 20:22:12.788971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.431 [2024-11-26 20:22:12.789021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59626 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59626 /var/tmp/spdk2.sock 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59626 /var/tmp/spdk2.sock 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59626 /var/tmp/spdk2.sock 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59626 ']' 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:20.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.381 20:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.382 [2024-11-26 20:22:13.869144] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:09:20.382 [2024-11-26 20:22:13.869316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 00:09:20.642 [2024-11-26 20:22:14.052858] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59598 has claimed it. 00:09:20.642 [2024-11-26 20:22:14.052936] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:21.214 ERROR: process (pid: 59626) is no longer running 00:09:21.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59626) - No such process 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59598 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59598 ']' 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59598 00:09:21.214 20:22:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59598 00:09:21.214 killing process with pid 59598 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59598' 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59598 00:09:21.214 20:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59598 00:09:24.532 ************************************ 00:09:24.532 END TEST locking_overlapped_coremask 00:09:24.532 ************************************ 00:09:24.532 00:09:24.532 real 0m5.072s 00:09:24.532 user 0m13.898s 00:09:24.532 sys 0m0.564s 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.532 20:22:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:24.532 20:22:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.532 20:22:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.532 20:22:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:24.532 ************************************ 00:09:24.532 START TEST 
locking_overlapped_coremask_via_rpc 00:09:24.532 ************************************ 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59691 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59691 /var/tmp/spdk.sock 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59691 ']' 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.532 20:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.532 [2024-11-26 20:22:17.599665] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:09:24.532 [2024-11-26 20:22:17.599873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59691 ] 00:09:24.532 [2024-11-26 20:22:17.766592] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:24.532 [2024-11-26 20:22:17.766659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.532 [2024-11-26 20:22:17.904893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.532 [2024-11-26 20:22:17.905036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.532 [2024-11-26 20:22:17.905070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59715 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59715 /var/tmp/spdk2.sock 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59715 ']' 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:25.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.548 20:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.548 [2024-11-26 20:22:19.013929] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:09:25.548 [2024-11-26 20:22:19.014065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59715 ] 00:09:25.805 [2024-11-26 20:22:19.195232] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:25.805 [2024-11-26 20:22:19.195313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.062 [2024-11-26 20:22:19.474877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.062 [2024-11-26 20:22:19.474968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.062 [2024-11-26 20:22:19.475003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:28.591 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.592 20:22:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.592 [2024-11-26 20:22:21.705517] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59691 has claimed it. 00:09:28.592 request: 00:09:28.592 { 00:09:28.592 "method": "framework_enable_cpumask_locks", 00:09:28.592 "req_id": 1 00:09:28.592 } 00:09:28.592 Got JSON-RPC error response 00:09:28.592 response: 00:09:28.592 { 00:09:28.592 "code": -32603, 00:09:28.592 "message": "Failed to claim CPU core: 2" 00:09:28.592 } 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59691 /var/tmp/spdk.sock 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59691 ']' 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.592 20:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59715 /var/tmp/spdk2.sock 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59715 ']' 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.592 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:28.850 00:09:28.850 real 0m4.838s 00:09:28.850 user 0m1.633s 00:09:28.850 sys 0m0.210s 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.850 20:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.850 ************************************ 00:09:28.850 END TEST locking_overlapped_coremask_via_rpc 00:09:28.850 ************************************ 00:09:28.850 20:22:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:28.850 20:22:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59691 ]] 00:09:28.850 20:22:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59691 00:09:28.850 20:22:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59691 ']' 00:09:28.850 20:22:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59691 00:09:28.850 20:22:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:28.850 20:22:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.850 20:22:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59691 00:09:28.851 20:22:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.851 killing process with pid 59691 00:09:28.851 20:22:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.851 20:22:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59691' 00:09:28.851 20:22:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59691 00:09:28.851 20:22:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59691 00:09:32.198 20:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59715 ]] 00:09:32.198 20:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59715 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59715 ']' 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59715 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59715 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:32.198 killing process with pid 59715 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59715' 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59715 00:09:32.198 20:22:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59715 00:09:34.740 20:22:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:34.740 20:22:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:34.740 20:22:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59691 ]] 00:09:34.740 20:22:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59691 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59691 ']' 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59691 00:09:34.740 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59691) - No such process 00:09:34.740 Process with pid 59691 is not found 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59691 is not found' 00:09:34.740 20:22:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59715 ]] 00:09:34.740 20:22:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59715 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59715 ']' 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59715 00:09:34.740 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59715) - No such process 00:09:34.740 Process with pid 59715 is not found 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59715 is not found' 00:09:34.740 20:22:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:34.740 00:09:34.740 real 0m56.801s 00:09:34.740 user 1m38.269s 00:09:34.740 sys 0m6.628s 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.740 20:22:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:34.740 
************************************ 00:09:34.740 END TEST cpu_locks 00:09:34.740 ************************************ 00:09:34.740 00:09:34.740 real 1m30.391s 00:09:34.740 user 2m46.298s 00:09:34.740 sys 0m10.753s 00:09:34.740 20:22:28 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.740 20:22:28 event -- common/autotest_common.sh@10 -- # set +x 00:09:34.740 ************************************ 00:09:34.740 END TEST event 00:09:34.740 ************************************ 00:09:34.740 20:22:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:34.740 20:22:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.740 20:22:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.740 20:22:28 -- common/autotest_common.sh@10 -- # set +x 00:09:34.740 ************************************ 00:09:34.740 START TEST thread 00:09:34.740 ************************************ 00:09:34.740 20:22:28 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:34.999 * Looking for test storage... 
00:09:34.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:34.999 20:22:28 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.999 20:22:28 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.999 20:22:28 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.999 20:22:28 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.999 20:22:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.999 20:22:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.999 20:22:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.999 20:22:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.999 20:22:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.999 20:22:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.999 20:22:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.999 20:22:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.999 20:22:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.999 20:22:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.999 20:22:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.000 20:22:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:35.000 20:22:28 thread -- scripts/common.sh@345 -- # : 1 00:09:35.000 20:22:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.000 20:22:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.000 20:22:28 thread -- scripts/common.sh@365 -- # decimal 1 00:09:35.000 20:22:28 thread -- scripts/common.sh@353 -- # local d=1 00:09:35.000 20:22:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.000 20:22:28 thread -- scripts/common.sh@355 -- # echo 1 00:09:35.000 20:22:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.000 20:22:28 thread -- scripts/common.sh@366 -- # decimal 2 00:09:35.000 20:22:28 thread -- scripts/common.sh@353 -- # local d=2 00:09:35.000 20:22:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.000 20:22:28 thread -- scripts/common.sh@355 -- # echo 2 00:09:35.000 20:22:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.000 20:22:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.000 20:22:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.000 20:22:28 thread -- scripts/common.sh@368 -- # return 0 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.000 --rc genhtml_branch_coverage=1 00:09:35.000 --rc genhtml_function_coverage=1 00:09:35.000 --rc genhtml_legend=1 00:09:35.000 --rc geninfo_all_blocks=1 00:09:35.000 --rc geninfo_unexecuted_blocks=1 00:09:35.000 00:09:35.000 ' 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.000 --rc genhtml_branch_coverage=1 00:09:35.000 --rc genhtml_function_coverage=1 00:09:35.000 --rc genhtml_legend=1 00:09:35.000 --rc geninfo_all_blocks=1 00:09:35.000 --rc geninfo_unexecuted_blocks=1 00:09:35.000 00:09:35.000 ' 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.000 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.000 --rc genhtml_branch_coverage=1 00:09:35.000 --rc genhtml_function_coverage=1 00:09:35.000 --rc genhtml_legend=1 00:09:35.000 --rc geninfo_all_blocks=1 00:09:35.000 --rc geninfo_unexecuted_blocks=1 00:09:35.000 00:09:35.000 ' 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.000 --rc genhtml_branch_coverage=1 00:09:35.000 --rc genhtml_function_coverage=1 00:09:35.000 --rc genhtml_legend=1 00:09:35.000 --rc geninfo_all_blocks=1 00:09:35.000 --rc geninfo_unexecuted_blocks=1 00:09:35.000 00:09:35.000 ' 00:09:35.000 20:22:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.000 20:22:28 thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.000 ************************************ 00:09:35.000 START TEST thread_poller_perf 00:09:35.000 ************************************ 00:09:35.000 20:22:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:35.000 [2024-11-26 20:22:28.538609] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:09:35.000 [2024-11-26 20:22:28.538721] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59921 ] 00:09:35.259 [2024-11-26 20:22:28.714647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.520 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:35.520 [2024-11-26 20:22:28.833430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.901 [2024-11-26T20:22:30.456Z] ====================================== 00:09:36.901 [2024-11-26T20:22:30.456Z] busy:2296607642 (cyc) 00:09:36.901 [2024-11-26T20:22:30.456Z] total_run_count: 348000 00:09:36.901 [2024-11-26T20:22:30.456Z] tsc_hz: 2290000000 (cyc) 00:09:36.901 [2024-11-26T20:22:30.456Z] ====================================== 00:09:36.901 [2024-11-26T20:22:30.456Z] poller_cost: 6599 (cyc), 2881 (nsec) 00:09:36.901 00:09:36.901 real 0m1.608s 00:09:36.901 user 0m1.404s 00:09:36.901 sys 0m0.095s 00:09:36.901 20:22:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.901 20:22:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:36.901 ************************************ 00:09:36.901 END TEST thread_poller_perf 00:09:36.901 ************************************ 00:09:36.901 20:22:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:36.901 20:22:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:36.901 20:22:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.901 20:22:30 thread -- common/autotest_common.sh@10 -- # set +x 00:09:36.901 ************************************ 00:09:36.901 START TEST thread_poller_perf 00:09:36.901 
************************************ 00:09:36.901 20:22:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:36.901 [2024-11-26 20:22:30.208917] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:09:36.901 [2024-11-26 20:22:30.209031] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:09:36.901 [2024-11-26 20:22:30.385044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.159 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:37.159 [2024-11-26 20:22:30.507827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.538 [2024-11-26T20:22:32.093Z] ====================================== 00:09:38.538 [2024-11-26T20:22:32.093Z] busy:2293497270 (cyc) 00:09:38.538 [2024-11-26T20:22:32.093Z] total_run_count: 4595000 00:09:38.538 [2024-11-26T20:22:32.093Z] tsc_hz: 2290000000 (cyc) 00:09:38.538 [2024-11-26T20:22:32.093Z] ====================================== 00:09:38.538 [2024-11-26T20:22:32.093Z] poller_cost: 499 (cyc), 217 (nsec) 00:09:38.538 00:09:38.538 real 0m1.619s 00:09:38.538 user 0m1.404s 00:09:38.538 sys 0m0.106s 00:09:38.538 20:22:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.538 20:22:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:38.538 ************************************ 00:09:38.538 END TEST thread_poller_perf 00:09:38.538 ************************************ 00:09:38.538 20:22:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:38.538 00:09:38.538 real 0m3.565s 00:09:38.538 user 0m2.960s 00:09:38.538 sys 0m0.400s 00:09:38.538 20:22:31 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.538 20:22:31 thread -- common/autotest_common.sh@10 -- # set +x 00:09:38.538 ************************************ 00:09:38.538 END TEST thread 00:09:38.538 ************************************ 00:09:38.538 20:22:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:38.538 20:22:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:38.538 20:22:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.538 20:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.538 20:22:31 -- common/autotest_common.sh@10 -- # set +x 00:09:38.538 ************************************ 00:09:38.538 START TEST app_cmdline 00:09:38.538 ************************************ 00:09:38.538 20:22:31 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:38.538 * Looking for test storage... 00:09:38.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:38.538 20:22:32 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.538 20:22:32 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.538 20:22:32 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.538 20:22:32 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.538 20:22:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.798 20:22:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:38.798 20:22:32 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.798 20:22:32 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.798 --rc genhtml_branch_coverage=1 00:09:38.799 --rc genhtml_function_coverage=1 00:09:38.799 --rc 
genhtml_legend=1 00:09:38.799 --rc geninfo_all_blocks=1 00:09:38.799 --rc geninfo_unexecuted_blocks=1 00:09:38.799 00:09:38.799 ' 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.799 --rc genhtml_branch_coverage=1 00:09:38.799 --rc genhtml_function_coverage=1 00:09:38.799 --rc genhtml_legend=1 00:09:38.799 --rc geninfo_all_blocks=1 00:09:38.799 --rc geninfo_unexecuted_blocks=1 00:09:38.799 00:09:38.799 ' 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.799 --rc genhtml_branch_coverage=1 00:09:38.799 --rc genhtml_function_coverage=1 00:09:38.799 --rc genhtml_legend=1 00:09:38.799 --rc geninfo_all_blocks=1 00:09:38.799 --rc geninfo_unexecuted_blocks=1 00:09:38.799 00:09:38.799 ' 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.799 --rc genhtml_branch_coverage=1 00:09:38.799 --rc genhtml_function_coverage=1 00:09:38.799 --rc genhtml_legend=1 00:09:38.799 --rc geninfo_all_blocks=1 00:09:38.799 --rc geninfo_unexecuted_blocks=1 00:09:38.799 00:09:38.799 ' 00:09:38.799 20:22:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:38.799 20:22:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60046 00:09:38.799 20:22:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:38.799 20:22:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60046 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60046 ']' 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:09:38.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.799 20:22:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 [2024-11-26 20:22:32.213711] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:09:38.799 [2024-11-26 20:22:32.214270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60046 ] 00:09:39.058 [2024-11-26 20:22:32.391798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.058 [2024-11-26 20:22:32.517447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.994 20:22:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.995 20:22:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:39.995 20:22:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:40.275 { 00:09:40.275 "version": "SPDK v25.01-pre git sha1 0836dccda", 00:09:40.275 "fields": { 00:09:40.275 "major": 25, 00:09:40.275 "minor": 1, 00:09:40.275 "patch": 0, 00:09:40.275 "suffix": "-pre", 00:09:40.275 "commit": "0836dccda" 00:09:40.275 } 00:09:40.275 } 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:40.275 20:22:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:40.275 20:22:33 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:40.534 request: 00:09:40.534 { 00:09:40.534 "method": "env_dpdk_get_mem_stats", 00:09:40.534 "req_id": 1 00:09:40.534 } 00:09:40.534 Got JSON-RPC error response 00:09:40.534 response: 00:09:40.534 { 00:09:40.534 "code": -32601, 00:09:40.534 "message": "Method not found" 00:09:40.534 } 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.534 20:22:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60046 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60046 ']' 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60046 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60046 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60046' 00:09:40.534 killing process with pid 60046 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@973 -- # kill 60046 00:09:40.534 20:22:34 app_cmdline -- common/autotest_common.sh@978 -- # wait 60046 00:09:43.824 ************************************ 00:09:43.824 END TEST app_cmdline 00:09:43.824 ************************************ 
00:09:43.824 00:09:43.824 real 0m4.966s 00:09:43.824 user 0m5.281s 00:09:43.824 sys 0m0.616s 00:09:43.824 20:22:36 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.824 20:22:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:43.824 20:22:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:43.824 20:22:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.824 20:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.824 20:22:36 -- common/autotest_common.sh@10 -- # set +x 00:09:43.824 ************************************ 00:09:43.824 START TEST version 00:09:43.824 ************************************ 00:09:43.824 20:22:36 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:43.824 * Looking for test storage... 00:09:43.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.824 20:22:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.824 20:22:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.824 20:22:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.824 20:22:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.824 20:22:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.824 20:22:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.824 20:22:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.824 20:22:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.824 20:22:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.824 20:22:37 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:43.824 20:22:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.824 20:22:37 version -- scripts/common.sh@344 -- # case "$op" in 00:09:43.824 20:22:37 version -- scripts/common.sh@345 -- # : 1 00:09:43.824 20:22:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.824 20:22:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.824 20:22:37 version -- scripts/common.sh@365 -- # decimal 1 00:09:43.824 20:22:37 version -- scripts/common.sh@353 -- # local d=1 00:09:43.824 20:22:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.824 20:22:37 version -- scripts/common.sh@355 -- # echo 1 00:09:43.824 20:22:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.824 20:22:37 version -- scripts/common.sh@366 -- # decimal 2 00:09:43.824 20:22:37 version -- scripts/common.sh@353 -- # local d=2 00:09:43.824 20:22:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.824 20:22:37 version -- scripts/common.sh@355 -- # echo 2 00:09:43.824 20:22:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.824 20:22:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.824 20:22:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.824 20:22:37 version -- scripts/common.sh@368 -- # return 0 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.824 --rc genhtml_branch_coverage=1 00:09:43.824 --rc genhtml_function_coverage=1 00:09:43.824 --rc genhtml_legend=1 00:09:43.824 --rc geninfo_all_blocks=1 00:09:43.824 --rc geninfo_unexecuted_blocks=1 00:09:43.824 00:09:43.824 ' 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:09:43.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.824 --rc genhtml_branch_coverage=1 00:09:43.824 --rc genhtml_function_coverage=1 00:09:43.824 --rc genhtml_legend=1 00:09:43.824 --rc geninfo_all_blocks=1 00:09:43.824 --rc geninfo_unexecuted_blocks=1 00:09:43.824 00:09:43.824 ' 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.824 --rc genhtml_branch_coverage=1 00:09:43.824 --rc genhtml_function_coverage=1 00:09:43.824 --rc genhtml_legend=1 00:09:43.824 --rc geninfo_all_blocks=1 00:09:43.824 --rc geninfo_unexecuted_blocks=1 00:09:43.824 00:09:43.824 ' 00:09:43.824 20:22:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.824 --rc genhtml_branch_coverage=1 00:09:43.824 --rc genhtml_function_coverage=1 00:09:43.824 --rc genhtml_legend=1 00:09:43.824 --rc geninfo_all_blocks=1 00:09:43.824 --rc geninfo_unexecuted_blocks=1 00:09:43.824 00:09:43.824 ' 00:09:43.824 20:22:37 version -- app/version.sh@17 -- # get_header_version major 00:09:43.824 20:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:43.824 20:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:43.824 20:22:37 version -- app/version.sh@14 -- # cut -f2 00:09:43.825 20:22:37 version -- app/version.sh@17 -- # major=25 00:09:43.825 20:22:37 version -- app/version.sh@18 -- # get_header_version minor 00:09:43.825 20:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:43.825 20:22:37 version -- app/version.sh@14 -- # cut -f2 00:09:43.825 20:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:43.825 20:22:37 version -- app/version.sh@18 -- # minor=1 00:09:43.825 20:22:37 
version -- app/version.sh@19 -- # get_header_version patch 00:09:43.825 20:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:43.825 20:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:43.825 20:22:37 version -- app/version.sh@14 -- # cut -f2 00:09:43.825 20:22:37 version -- app/version.sh@19 -- # patch=0 00:09:43.825 20:22:37 version -- app/version.sh@20 -- # get_header_version suffix 00:09:43.825 20:22:37 version -- app/version.sh@14 -- # cut -f2 00:09:43.825 20:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:43.825 20:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:43.825 20:22:37 version -- app/version.sh@20 -- # suffix=-pre 00:09:43.825 20:22:37 version -- app/version.sh@22 -- # version=25.1 00:09:43.825 20:22:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:43.825 20:22:37 version -- app/version.sh@28 -- # version=25.1rc0 00:09:43.825 20:22:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:43.825 20:22:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:43.825 20:22:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:43.825 20:22:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:43.825 00:09:43.825 real 0m0.293s 00:09:43.825 user 0m0.207s 00:09:43.825 sys 0m0.124s 00:09:43.825 ************************************ 00:09:43.825 END TEST version 00:09:43.825 ************************************ 00:09:43.825 20:22:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.825 20:22:37 version -- common/autotest_common.sh@10 -- # set +x 00:09:43.825 
20:22:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:43.825 20:22:37 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:43.825 20:22:37 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:43.825 20:22:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.825 20:22:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.825 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:43.825 ************************************ 00:09:43.825 START TEST bdev_raid 00:09:43.825 ************************************ 00:09:43.825 20:22:37 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:43.825 * Looking for test storage... 00:09:44.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.084 20:22:37 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.084 --rc genhtml_branch_coverage=1 00:09:44.084 --rc genhtml_function_coverage=1 00:09:44.084 --rc genhtml_legend=1 00:09:44.084 --rc geninfo_all_blocks=1 00:09:44.084 --rc geninfo_unexecuted_blocks=1 00:09:44.084 00:09:44.084 ' 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.084 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:44.084 --rc genhtml_branch_coverage=1 00:09:44.084 --rc genhtml_function_coverage=1 00:09:44.084 --rc genhtml_legend=1 00:09:44.084 --rc geninfo_all_blocks=1 00:09:44.084 --rc geninfo_unexecuted_blocks=1 00:09:44.084 00:09:44.084 ' 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.084 --rc genhtml_branch_coverage=1 00:09:44.084 --rc genhtml_function_coverage=1 00:09:44.084 --rc genhtml_legend=1 00:09:44.084 --rc geninfo_all_blocks=1 00:09:44.084 --rc geninfo_unexecuted_blocks=1 00:09:44.084 00:09:44.084 ' 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.084 --rc genhtml_branch_coverage=1 00:09:44.084 --rc genhtml_function_coverage=1 00:09:44.084 --rc genhtml_legend=1 00:09:44.084 --rc geninfo_all_blocks=1 00:09:44.084 --rc geninfo_unexecuted_blocks=1 00:09:44.084 00:09:44.084 ' 00:09:44.084 20:22:37 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:44.084 20:22:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:44.084 20:22:37 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:44.084 20:22:37 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:44.084 20:22:37 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:44.084 20:22:37 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:44.084 20:22:37 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:44.084 20:22:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.085 20:22:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.085 20:22:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.085 ************************************ 
00:09:44.085 START TEST raid1_resize_data_offset_test 00:09:44.085 ************************************ 00:09:44.085 Process raid pid: 60245 00:09:44.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60245 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60245' 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60245 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60245 ']' 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.085 20:22:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.085 [2024-11-26 20:22:37.605336] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:09:44.085 [2024-11-26 20:22:37.605571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.344 [2024-11-26 20:22:37.787032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.605 [2024-11-26 20:22:37.922829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.605 [2024-11-26 20:22:38.154747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.605 [2024-11-26 20:22:38.154893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.174 malloc0 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.174 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.433 malloc1 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.433 20:22:38 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.433 null0 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.433 [2024-11-26 20:22:38.749553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:09:45.433 [2024-11-26 20:22:38.751838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:45.433 [2024-11-26 20:22:38.751988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:09:45.433 [2024-11-26 20:22:38.752272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:45.433 [2024-11-26 20:22:38.752332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:09:45.433 [2024-11-26 20:22:38.752719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:45.433 [2024-11-26 20:22:38.752984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:45.433 [2024-11-26 20:22:38.753044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:45.433 [2024-11-26 20:22:38.753327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.433 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.434 [2024-11-26 20:22:38.813490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.434 20:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.001 malloc2 00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.001 [2024-11-26 20:22:39.452484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:46.001 [2024-11-26 20:22:39.471966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.001 [2024-11-26 20:22:39.474116] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60245
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60245 ']'
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60245
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux
']'
00:09:46.001 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60245
00:09:46.260 killing process with pid 60245 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:46.260 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:46.260 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60245'
00:09:46.260 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60245
00:09:46.260 20:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60245
00:09:46.260 [2024-11-26 20:22:39.568888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:46.260 [2024-11-26 20:22:39.569667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:09:46.260 [2024-11-26 20:22:39.569735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:46.260 [2024-11-26 20:22:39.569755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:09:46.260 [2024-11-26 20:22:39.611719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:46.260 [2024-11-26 20:22:39.612060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:46.260 [2024-11-26 20:22:39.612078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:09:48.172 [2024-11-26 20:22:41.595212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:49.551 ************************************
00:09:49.551 END TEST raid1_resize_data_offset_test
00:09:49.551 20:22:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:09:49.551
00:09:49.551 real 0m5.272s
00:09:49.551 user 0m5.233s
00:09:49.551 sys 0m0.589s
00:09:49.551 20:22:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:49.551 20:22:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.551 ************************************
00:09:49.551 20:22:42 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:09:49.551 20:22:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:49.551 20:22:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:49.551 20:22:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:49.551 ************************************
00:09:49.551 START TEST raid0_resize_superblock_test
00:09:49.551 ************************************
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60334
00:09:49.551 Process raid pid: 60334
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60334'
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60334
00:09:49.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60334 ']'
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:49.551 20:22:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.552 [2024-11-26 20:22:42.914711] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization...
00:09:49.552 [2024-11-26 20:22:42.914843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:49.552 [2024-11-26 20:22:43.092858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:49.856 [2024-11-26 20:22:43.212829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:50.132 [2024-11-26 20:22:43.436950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:50.132 [2024-11-26 20:22:43.436986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:50.392 20:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:50.392 20:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:50.392 20:22:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:09:50.392 20:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.392 20:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.960 malloc0
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.960 [2024-11-26 20:22:44.388895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 20:22:44.389027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 20:22:44.389111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-26 20:22:44.389172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 20:22:44.391825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 20:22:44.391918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.960 cc957fed-5b22-4f85-8cf3-be9f5ae64144
00:09:50.960 20:22:44
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.960 b169a263-d307-438f-9849-931b1089a123
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.960 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.219 634edaf6-a7f8-404d-86bc-f4ee0b5d90d3
00:09:51.219 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.219 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:09:51.219 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:09:51.219 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.219 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.219 [2024-11-26 20:22:44.526005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b169a263-d307-438f-9849-931b1089a123 is claimed
00:09:51.219 [2024-11-26 20:22:44.526096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 634edaf6-a7f8-404d-86bc-f4ee0b5d90d3 is claimed
00:09:51.219 [2024-11-26 20:22:44.526225]
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-26 20:22:44.526240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-26 20:22:44.526550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-26 20:22:44.526775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-26 20:22:44.526793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-26 20:22:44.526972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.220 20:22:44
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
[2024-11-26 20:22:44.638107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.220 [2024-11-26 20:22:44.690062] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:09:51.220 [2024-11-26 20:22:44.690097] bdev_raid.c:2330:raid_bdev_resize_base_bdev:
*NOTICE*: base_bdev 'b169a263-d307-438f-9849-931b1089a123' was resized: old size 131072, new size 204800
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.220 [2024-11-26 20:22:44.701938] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:09:51.220 [2024-11-26 20:22:44.701970] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '634edaf6-a7f8-404d-86bc-f4ee0b5d90d3' was resized: old size 131072, new size 204800
[2024-11-26 20:22:44.702007] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:09:51.220 20:22:44
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:09:51.220 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.481 [2024-11-26 20:22:44.805905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- #
xtrace_disable
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.481 [2024-11-26 20:22:44.845560] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-26 20:22:44.845654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-26 20:22:44.845677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-26 20:22:44.845697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-26 20:22:44.845855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 20:22:44.845911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 20:22:44.845933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.481 [2024-11-26 20:22:44.857409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 20:22:44.857470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 20:22:44.857509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-26 20:22:44.857525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 20:22:44.859841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 20:22:44.859883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
[2024-11-26 20:22:44.861948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b169a263-d307-438f-9849-931b1089a123
[2024-11-26 20:22:44.862033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b169a263-d307-438f-9849-931b1089a123 is claimed
[2024-11-26 20:22:44.862135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 634edaf6-a7f8-404d-86bc-f4ee0b5d90d3
[2024-11-26 20:22:44.862155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 634edaf6-a7f8-404d-86bc-f4ee0b5d90d3 is claimed
[2024-11-26 20:22:44.862361] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 634edaf6-a7f8-404d-86bc-f4ee0b5d90d3 (2) smaller than existing raid bdev Raid (3)
[2024-11-26 20:22:44.862389] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b169a263-d307-438f-9849-931b1089a123: File exists
[2024-11-26 20:22:44.862430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-26 20:22:44.862447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-26 20:22:44.862761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
pt0
[2024-11-26 20:22:44.863005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-26 20:22:44.863020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-26 20:22:44.863192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*:
raid_bdev_destroy_cb
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.481 [2024-11-26 20:22:44.886206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60334
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test
-- common/autotest_common.sh@954 -- # '[' -z 60334 ']'
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60334
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60334 killing process with pid 60334
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60334'
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60334
00:09:51.481 [2024-11-26 20:22:44.959708] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-26 20:22:44.959793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 20:22:44.959845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 20:22:44.959855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:09:51.481 20:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60334
00:09:53.389 [2024-11-26 20:22:46.524296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:54.350 20:22:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:09:54.350
00:09:54.350 real 0m4.878s
00:09:54.350 user 0m5.122s
00:09:54.350 sys 0m0.565s
00:09:54.350 20:22:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:54.350 20:22:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.350 ************************************
00:09:54.350 END TEST raid0_resize_superblock_test
00:09:54.350 ************************************
00:09:54.350 20:22:47 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:09:54.350 20:22:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:54.350 20:22:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:54.350 20:22:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:54.350 ************************************
00:09:54.350 START TEST raid1_resize_superblock_test
00:09:54.350 ************************************
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60433
00:09:54.350 Process raid pid: 60433
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60433'
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60433
00:09:54.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60433 ']'
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.350 20:22:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.350 [2024-11-26 20:22:47.874603] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization...
[2024-11-26 20:22:47.874724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:54.609 [2024-11-26 20:22:48.064451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:54.868 [2024-11-26 20:22:48.174481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:54.868 [2024-11-26 20:22:48.402398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:54.868 [2024-11-26 20:22:48.402455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:55.438 20:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:55.438 20:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:55.438 20:22:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:09:55.438 20:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.438 20:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.007 malloc0
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.007 [2024-11-26 20:22:49.358643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 20:22:49.358713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 20:22:49.358748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-26 20:22:49.358763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 20:22:49.361196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 20:22:49.361256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.007 209f3e0d-330d-4397-af2e-a8da6a59c252
00:09:56.007 20:22:49
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.007 c848947e-cad2-46fe-86b0-53d71c175800 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.007 ea1e9694-c158-44dc-b39a-5c8b59ab2a26 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.007 [2024-11-26 20:22:49.495577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c848947e-cad2-46fe-86b0-53d71c175800 is claimed 00:09:56.007 [2024-11-26 20:22:49.495696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ea1e9694-c158-44dc-b39a-5c8b59ab2a26 is claimed 00:09:56.007 [2024-11-26 20:22:49.495864] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:56.007 [2024-11-26 20:22:49.495882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:56.007 [2024-11-26 20:22:49.496198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:56.007 [2024-11-26 20:22:49.496461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:56.007 [2024-11-26 20:22:49.496476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:56.007 [2024-11-26 20:22:49.496697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.007 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:56.266 20:22:49 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:56.266 [2024-11-26 20:22:49.595745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.266 [2024-11-26 20:22:49.643680] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:56.266 [2024-11-26 20:22:49.643720] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'c848947e-cad2-46fe-86b0-53d71c175800' was resized: old size 131072, new size 204800 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.266 [2024-11-26 20:22:49.655589] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:56.266 [2024-11-26 20:22:49.655618] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ea1e9694-c158-44dc-b39a-5c8b59ab2a26' was resized: old size 131072, new size 204800 00:09:56.266 [2024-11-26 20:22:49.655651] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:56.266 20:22:49 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.266 [2024-11-26 20:22:49.763490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.266 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.266 [2024-11-26 20:22:49.807153] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:56.266 [2024-11-26 20:22:49.807304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:56.266 [2024-11-26 20:22:49.807385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:56.266 [2024-11-26 20:22:49.807600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.266 [2024-11-26 20:22:49.807861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.266 [2024-11-26 20:22:49.807971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.266 [2024-11-26 20:22:49.808026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:56.267 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.267 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:56.267 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.267 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.267 [2024-11-26 20:22:49.815023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:56.267 [2024-11-26 20:22:49.815090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.267 [2024-11-26 20:22:49.815119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:56.267 [2024-11-26 20:22:49.815139] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:09:56.267 [2024-11-26 20:22:49.817748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.267 [2024-11-26 20:22:49.817795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:56.526 [2024-11-26 20:22:49.819697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c848947e-cad2-46fe-86b0-53d71c175800 00:09:56.526 [2024-11-26 20:22:49.819777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c848947e-cad2-46fe-86b0-53d71c175800 is claimed 00:09:56.526 [2024-11-26 20:22:49.819903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ea1e9694-c158-44dc-b39a-5c8b59ab2a26 00:09:56.526 [2024-11-26 20:22:49.819924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ea1e9694-c158-44dc-b39a-5c8b59ab2a26 is claimed 00:09:56.526 [2024-11-26 20:22:49.820120] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ea1e9694-c158-44dc-b39a-5c8b59ab2a26 (2) smaller than existing raid bdev Raid (3) 00:09:56.526 [2024-11-26 20:22:49.820147] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c848947e-cad2-46fe-86b0-53d71c175800: File exists 00:09:56.526 [2024-11-26 20:22:49.820195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:56.526 [2024-11-26 20:22:49.820214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:56.526 pt0 00:09:56.526 [2024-11-26 20:22:49.820542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:56.526 [2024-11-26 20:22:49.820763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:56.526 [2024-11-26 20:22:49.820775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:56.526 [2024-11-26 20:22:49.820963] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.526 [2024-11-26 20:22:49.836030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60433 00:09:56.526 20:22:49 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60433 ']' 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60433 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60433 00:09:56.526 killing process with pid 60433 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60433' 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60433 00:09:56.526 [2024-11-26 20:22:49.925782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.526 [2024-11-26 20:22:49.925877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.526 20:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60433 00:09:56.526 [2024-11-26 20:22:49.925935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.526 [2024-11-26 20:22:49.925944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:58.431 [2024-11-26 20:22:51.500409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.367 20:22:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:59.367 00:09:59.367 real 0m4.922s 00:09:59.367 user 
0m5.136s 00:09:59.367 sys 0m0.594s 00:09:59.367 ************************************ 00:09:59.367 END TEST raid1_resize_superblock_test 00:09:59.367 ************************************ 00:09:59.367 20:22:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.367 20:22:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.367 20:22:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:59.367 20:22:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:59.367 20:22:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:59.367 20:22:52 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:59.367 20:22:52 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:59.367 20:22:52 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:59.367 20:22:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.367 20:22:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.367 20:22:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.367 ************************************ 00:09:59.367 START TEST raid_function_test_raid0 00:09:59.367 ************************************ 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60541 00:09:59.367 Process raid pid: 60541 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60541' 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60541 00:09:59.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60541 ']' 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.367 20:22:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:59.367 [2024-11-26 20:22:52.887581] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:09:59.367 [2024-11-26 20:22:52.887701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.627 [2024-11-26 20:22:53.066743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.886 [2024-11-26 20:22:53.184861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.886 [2024-11-26 20:22:53.391812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.886 [2024-11-26 20:22:53.391858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 Base_1 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 Base_2 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 [2024-11-26 20:22:53.842483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:00.455 [2024-11-26 20:22:53.844592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:00.455 [2024-11-26 20:22:53.844733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:00.455 [2024-11-26 20:22:53.844786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:00.455 [2024-11-26 20:22:53.845128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:00.455 [2024-11-26 20:22:53.845379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:00.455 [2024-11-26 20:22:53.845426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:00.455 [2024-11-26 20:22:53.845634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:00.455 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:00.456 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:10:00.456 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:00.456 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:00.456 20:22:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:00.717 [2024-11-26 20:22:54.098157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:00.717 /dev/nbd0 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:00.717 
20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:00.717 1+0 records in 00:10:00.717 1+0 records out 00:10:00.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211871 s, 19.3 MB/s 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:00.717 20:22:54 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:00.977 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:00.977 { 00:10:00.977 "nbd_device": "/dev/nbd0", 00:10:00.977 "bdev_name": "raid" 00:10:00.977 } 00:10:00.977 ]' 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:00.978 { 00:10:00.978 "nbd_device": "/dev/nbd0", 00:10:00.978 "bdev_name": "raid" 00:10:00.978 } 00:10:00.978 ]' 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:00.978 4096+0 records in 00:10:00.978 4096+0 records out 00:10:00.978 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0277383 s, 75.6 MB/s 00:10:00.978 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:01.238 4096+0 records in 00:10:01.238 4096+0 records out 00:10:01.238 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.217917 s, 9.6 MB/s 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:01.238 128+0 records in 00:10:01.238 128+0 records out 00:10:01.238 65536 bytes (66 kB, 64 KiB) copied, 0.00125825 s, 52.1 MB/s 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:01.238 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:01.498 2035+0 records in 00:10:01.498 2035+0 records out 00:10:01.498 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0158219 s, 65.9 MB/s 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:01.498 20:22:54 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:01.498 456+0 records in 00:10:01.498 456+0 records out 00:10:01.498 233472 bytes (233 kB, 228 KiB) copied, 0.00390389 s, 59.8 MB/s 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.498 20:22:54 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.498 20:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:01.758 [2024-11-26 20:22:55.096579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:01.758 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:10:02.020 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60541 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60541 ']' 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60541 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60541 00:10:02.021 killing process with pid 60541 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60541' 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60541 
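The `raid_unmap_data_verify` loop traced above (bdev_raid.sh@17-52) can be sketched as a standalone script. This is a hedged reconstruction, not the actual test: plain temp files stand in for `/dev/nbd0` so it runs without root or an NBD device, and `dd if=/dev/zero` replaces the `blkdiscard -o OFF -l LEN` step used against the real device. Block size, block count, and the three (offset, count) pairs mirror the log.

```shell
#!/bin/sh
# Sketch of the unmap-verify pattern: write identical random data to a
# reference file and a "device", zero matching ranges in both, and
# byte-compare after every step. On the real NBD device the zeroing of
# "$dev" is done with blkdiscard instead of dd.
set -e
blksize=512
rw_blk_num=4096
ref=$(mktemp)
dev=$(mktemp)

# 1. Seed both copies with the same 2 MiB of random data (bdev_raid.sh@29-30).
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num status=none
cp "$ref" "$dev"
cmp -n $((blksize * rw_blk_num)) "$ref" "$dev"

# 2. For each (offset, count) pair from unmap_blk_offs/unmap_blk_nums,
#    zero the range in both copies, then compare the whole region.
for spec in "0 128" "1028 2035" "321 456"; do
  set -- $spec
  dd if=/dev/zero of="$ref" bs=$blksize seek=$1 count=$2 conv=notrunc status=none
  dd if=/dev/zero of="$dev" bs=$blksize seek=$1 count=$2 conv=notrunc status=none
  # cmp exits non-zero on the first mismatching byte, failing the script.
  cmp -n $((blksize * rw_blk_num)) "$ref" "$dev"
done

rm -f "$ref" "$dev"
echo OK
```

The comparison after each discard is what makes this a functional test of the RAID bdev: a correct unmap must read back as zeroes over exactly the discarded range and leave every surrounding byte untouched.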
00:10:02.021 [2024-11-26 20:22:55.478024] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.021 [2024-11-26 20:22:55.478155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.021 20:22:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60541 00:10:02.021 [2024-11-26 20:22:55.478209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.021 [2024-11-26 20:22:55.478226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:02.285 [2024-11-26 20:22:55.706650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.666 20:22:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:10:03.666 00:10:03.666 real 0m4.134s 00:10:03.666 user 0m4.861s 00:10:03.666 sys 0m0.983s 00:10:03.666 20:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.666 20:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:03.666 ************************************ 00:10:03.666 END TEST raid_function_test_raid0 00:10:03.666 ************************************ 00:10:03.666 20:22:56 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:10:03.666 20:22:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.666 20:22:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.666 20:22:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.666 ************************************ 00:10:03.666 START TEST raid_function_test_concat 00:10:03.666 ************************************ 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60670 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.666 Process raid pid: 60670 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60670' 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60670 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60670 ']' 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.666 20:22:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:03.666 [2024-11-26 20:22:57.081151] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:10:03.666 [2024-11-26 20:22:57.081397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.927 [2024-11-26 20:22:57.259416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.927 [2024-11-26 20:22:57.380711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.187 [2024-11-26 20:22:57.604801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.187 [2024-11-26 20:22:57.604850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.446 20:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.446 20:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:10:04.446 20:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:04.446 20:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.446 20:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:04.705 Base_1 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:04.705 Base_2 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:04.705 [2024-11-26 20:22:58.053814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:04.705 [2024-11-26 20:22:58.055804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:04.705 [2024-11-26 20:22:58.055875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:04.705 [2024-11-26 20:22:58.055887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:04.705 [2024-11-26 20:22:58.056165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:04.705 [2024-11-26 20:22:58.056350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:04.705 [2024-11-26 20:22:58.056362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:04.705 [2024-11-26 20:22:58.056505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.705 20:22:58 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:04.705 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:04.964 [2024-11-26 20:22:58.309456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:04.964 /dev/nbd0 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.964 1+0 records in 00:10:04.964 1+0 records out 00:10:04.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326288 s, 12.6 MB/s 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:10:04.964 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:05.224 { 00:10:05.224 "nbd_device": "/dev/nbd0", 00:10:05.224 "bdev_name": "raid" 00:10:05.224 } 00:10:05.224 ]' 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:05.224 { 00:10:05.224 "nbd_device": "/dev/nbd0", 00:10:05.224 "bdev_name": "raid" 00:10:05.224 } 00:10:05.224 ]' 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 
00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:05.224 4096+0 records in 00:10:05.224 4096+0 records out 00:10:05.224 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0319818 s, 65.6 MB/s 00:10:05.224 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:05.484 4096+0 records in 00:10:05.484 4096+0 records out 00:10:05.484 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.213654 s, 9.8 MB/s 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:05.484 128+0 records in 00:10:05.484 128+0 records out 00:10:05.484 65536 bytes (66 kB, 64 KiB) copied, 0.00108379 s, 60.5 MB/s 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:05.484 2035+0 records in 00:10:05.484 2035+0 records out 00:10:05.484 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0133429 s, 78.1 MB/s 00:10:05.484 20:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:05.484 20:22:59 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:05.484 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:05.484 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:05.484 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:05.484 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:05.484 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:05.484 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:05.484 456+0 records in 00:10:05.484 456+0 records out 00:10:05.484 233472 bytes (233 kB, 228 KiB) copied, 0.0045704 s, 51.1 MB/s 00:10:05.484 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:05.746 
20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:05.746 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.746 [2024-11-26 20:22:59.280295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.747 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.747 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:05.747 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:10:05.747 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.747 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:05.747 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:05.747 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:06.007 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:06.007 20:22:59 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:06.007 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60670 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60670 ']' 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60670 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60670 00:10:06.267 killing process with pid 60670 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60670' 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60670 00:10:06.267 [2024-11-26 20:22:59.616827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.267 [2024-11-26 20:22:59.616935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.267 20:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60670 00:10:06.267 [2024-11-26 20:22:59.616994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.267 [2024-11-26 20:22:59.617007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:06.527 [2024-11-26 20:22:59.838167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.920 20:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:10:07.920 00:10:07.920 real 0m4.055s 00:10:07.920 user 0m4.782s 00:10:07.920 sys 0m0.936s 00:10:07.920 ************************************ 00:10:07.920 END TEST raid_function_test_concat 00:10:07.920 ************************************ 00:10:07.920 20:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.920 20:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:07.920 20:23:01 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:10:07.920 20:23:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.920 20:23:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.920 20:23:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.920 ************************************ 00:10:07.920 START TEST raid0_resize_test 00:10:07.920 ************************************ 00:10:07.920 20:23:01 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:07.920 Process raid pid: 60798 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60798 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60798' 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60798 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60798 ']' 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:07.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.920 20:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.920 [2024-11-26 20:23:01.201874] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:07.920 [2024-11-26 20:23:01.202075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.920 [2024-11-26 20:23:01.379913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.180 [2024-11-26 20:23:01.506681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.180 [2024-11-26 20:23:01.733829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.180 [2024-11-26 20:23:01.733951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 Base_1 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 Base_2 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 [2024-11-26 20:23:02.101261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:08.751 [2024-11-26 20:23:02.103163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:08.751 [2024-11-26 20:23:02.103274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:08.751 [2024-11-26 20:23:02.103323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:08.751 [2024-11-26 20:23:02.103609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:08.751 [2024-11-26 20:23:02.103779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:08.751 [2024-11-26 20:23:02.103819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:08.751 [2024-11-26 20:23:02.104000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 [2024-11-26 20:23:02.113197] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:08.751 [2024-11-26 20:23:02.113277] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:08.751 true 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 [2024-11-26 20:23:02.129409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 [2024-11-26 20:23:02.165147] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:08.751 [2024-11-26 20:23:02.165224] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:08.751 [2024-11-26 20:23:02.165282] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:08.751 true 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 [2024-11-26 20:23:02.181321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60798 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60798 ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60798 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@959 -- # uname 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60798 00:10:08.751 killing process with pid 60798 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60798' 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60798 00:10:08.751 [2024-11-26 20:23:02.273217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.751 [2024-11-26 20:23:02.273326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.751 [2024-11-26 20:23:02.273384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.751 [2024-11-26 20:23:02.273394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:08.751 20:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60798 00:10:08.751 [2024-11-26 20:23:02.291875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.130 ************************************ 00:10:10.130 END TEST raid0_resize_test 00:10:10.130 20:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:10.130 00:10:10.130 real 0m2.333s 00:10:10.130 user 0m2.506s 00:10:10.130 sys 0m0.343s 00:10:10.130 20:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.130 20:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.130 
************************************ 00:10:10.130 20:23:03 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:10:10.130 20:23:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.130 20:23:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.130 20:23:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.130 ************************************ 00:10:10.130 START TEST raid1_resize_test 00:10:10.130 ************************************ 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60854 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60854' 00:10:10.130 Process raid pid: 60854 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60854 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@835 -- # '[' -z 60854 ']' 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.130 20:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.130 [2024-11-26 20:23:03.602883] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:10.130 [2024-11-26 20:23:03.602999] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.390 [2024-11-26 20:23:03.780561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.390 [2024-11-26 20:23:03.907501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.649 [2024-11-26 20:23:04.137495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.649 [2024-11-26 20:23:04.137546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.907 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.907 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 Base_1 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 Base_2 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 [2024-11-26 20:23:04.482042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:11.166 [2024-11-26 20:23:04.483805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:11.166 [2024-11-26 20:23:04.483865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:11.166 [2024-11-26 20:23:04.483876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:11.166 [2024-11-26 20:23:04.484120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:11.166 [2024-11-26 20:23:04.484250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:11.166 [2024-11-26 20:23:04.484259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007780 00:10:11.166 [2024-11-26 20:23:04.484394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 [2024-11-26 20:23:04.490011] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:11.166 [2024-11-26 20:23:04.490082] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:11.166 true 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 [2024-11-26 20:23:04.506186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 [2024-11-26 20:23:04.549910] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:11.166 [2024-11-26 20:23:04.549935] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:11.166 [2024-11-26 20:23:04.549965] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:10:11.167 true 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.167 [2024-11-26 20:23:04.566076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:10:11.167 
20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60854 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60854 ']' 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60854 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60854 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60854' 00:10:11.167 killing process with pid 60854 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60854 00:10:11.167 [2024-11-26 20:23:04.633973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.167 [2024-11-26 20:23:04.634129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.167 20:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60854 00:10:11.167 [2024-11-26 20:23:04.634700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.167 [2024-11-26 20:23:04.634779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:11.167 [2024-11-26 20:23:04.652945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.543 20:23:05 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:10:12.544 00:10:12.544 real 0m2.309s 00:10:12.544 user 0m2.464s 00:10:12.544 sys 0m0.337s 00:10:12.544 20:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.544 20:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.544 ************************************ 00:10:12.544 END TEST raid1_resize_test 00:10:12.544 ************************************ 00:10:12.544 20:23:05 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:12.544 20:23:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:12.544 20:23:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:12.544 20:23:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:12.544 20:23:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.544 20:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.544 ************************************ 00:10:12.544 START TEST raid_state_function_test 00:10:12.544 ************************************ 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.544 20:23:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60911 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.544 Process raid pid: 60911 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60911' 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60911 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60911 ']' 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.544 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.544 [2024-11-26 20:23:05.994387] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:10:12.544 [2024-11-26 20:23:05.994592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.803 [2024-11-26 20:23:06.173419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.803 [2024-11-26 20:23:06.300747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.062 [2024-11-26 20:23:06.523772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.062 [2024-11-26 20:23:06.523917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.631 [2024-11-26 20:23:06.884796] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.631 [2024-11-26 20:23:06.884869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.631 [2024-11-26 20:23:06.884883] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.631 [2024-11-26 20:23:06.884894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.631 20:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.631 "name": "Existed_Raid", 00:10:13.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.631 "strip_size_kb": 64, 00:10:13.631 "state": "configuring", 00:10:13.631 
"raid_level": "raid0", 00:10:13.631 "superblock": false, 00:10:13.631 "num_base_bdevs": 2, 00:10:13.631 "num_base_bdevs_discovered": 0, 00:10:13.631 "num_base_bdevs_operational": 2, 00:10:13.631 "base_bdevs_list": [ 00:10:13.631 { 00:10:13.631 "name": "BaseBdev1", 00:10:13.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.631 "is_configured": false, 00:10:13.631 "data_offset": 0, 00:10:13.631 "data_size": 0 00:10:13.631 }, 00:10:13.631 { 00:10:13.631 "name": "BaseBdev2", 00:10:13.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.631 "is_configured": false, 00:10:13.631 "data_offset": 0, 00:10:13.631 "data_size": 0 00:10:13.631 } 00:10:13.631 ] 00:10:13.631 }' 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.631 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.891 [2024-11-26 20:23:07.339948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.891 [2024-11-26 20:23:07.340075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:13.891 [2024-11-26 20:23:07.351920] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.891 [2024-11-26 20:23:07.352026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.891 [2024-11-26 20:23:07.352055] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.891 [2024-11-26 20:23:07.352081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.891 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.891 [2024-11-26 20:23:07.403088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.892 BaseBdev1 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.892 [ 00:10:13.892 { 00:10:13.892 "name": "BaseBdev1", 00:10:13.892 "aliases": [ 00:10:13.892 "0830b484-e27b-40f4-b72c-66a10c6e4156" 00:10:13.892 ], 00:10:13.892 "product_name": "Malloc disk", 00:10:13.892 "block_size": 512, 00:10:13.892 "num_blocks": 65536, 00:10:13.892 "uuid": "0830b484-e27b-40f4-b72c-66a10c6e4156", 00:10:13.892 "assigned_rate_limits": { 00:10:13.892 "rw_ios_per_sec": 0, 00:10:13.892 "rw_mbytes_per_sec": 0, 00:10:13.892 "r_mbytes_per_sec": 0, 00:10:13.892 "w_mbytes_per_sec": 0 00:10:13.892 }, 00:10:13.892 "claimed": true, 00:10:13.892 "claim_type": "exclusive_write", 00:10:13.892 "zoned": false, 00:10:13.892 "supported_io_types": { 00:10:13.892 "read": true, 00:10:13.892 "write": true, 00:10:13.892 "unmap": true, 00:10:13.892 "flush": true, 00:10:13.892 "reset": true, 00:10:13.892 "nvme_admin": false, 00:10:13.892 "nvme_io": false, 00:10:13.892 "nvme_io_md": false, 00:10:13.892 "write_zeroes": true, 00:10:13.892 "zcopy": true, 00:10:13.892 "get_zone_info": false, 00:10:13.892 "zone_management": false, 00:10:13.892 "zone_append": false, 00:10:13.892 "compare": false, 00:10:13.892 "compare_and_write": false, 00:10:13.892 "abort": true, 00:10:13.892 "seek_hole": false, 00:10:13.892 "seek_data": false, 00:10:13.892 "copy": true, 00:10:13.892 "nvme_iov_md": 
false 00:10:13.892 }, 00:10:13.892 "memory_domains": [ 00:10:13.892 { 00:10:13.892 "dma_device_id": "system", 00:10:13.892 "dma_device_type": 1 00:10:13.892 }, 00:10:13.892 { 00:10:13.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.892 "dma_device_type": 2 00:10:13.892 } 00:10:13.892 ], 00:10:13.892 "driver_specific": {} 00:10:13.892 } 00:10:13.892 ] 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.892 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.151 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.151 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.151 20:23:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.151 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.151 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.151 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.151 "name": "Existed_Raid", 00:10:14.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.151 "strip_size_kb": 64, 00:10:14.151 "state": "configuring", 00:10:14.151 "raid_level": "raid0", 00:10:14.151 "superblock": false, 00:10:14.151 "num_base_bdevs": 2, 00:10:14.151 "num_base_bdevs_discovered": 1, 00:10:14.151 "num_base_bdevs_operational": 2, 00:10:14.151 "base_bdevs_list": [ 00:10:14.151 { 00:10:14.151 "name": "BaseBdev1", 00:10:14.151 "uuid": "0830b484-e27b-40f4-b72c-66a10c6e4156", 00:10:14.151 "is_configured": true, 00:10:14.151 "data_offset": 0, 00:10:14.151 "data_size": 65536 00:10:14.151 }, 00:10:14.151 { 00:10:14.151 "name": "BaseBdev2", 00:10:14.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.151 "is_configured": false, 00:10:14.151 "data_offset": 0, 00:10:14.151 "data_size": 0 00:10:14.151 } 00:10:14.151 ] 00:10:14.151 }' 00:10:14.151 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.151 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.411 [2024-11-26 20:23:07.894334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.411 [2024-11-26 20:23:07.894396] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.411 [2024-11-26 20:23:07.906361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.411 [2024-11-26 20:23:07.908445] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.411 [2024-11-26 20:23:07.908554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.411 "name": "Existed_Raid", 00:10:14.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.411 "strip_size_kb": 64, 00:10:14.411 "state": "configuring", 00:10:14.411 "raid_level": "raid0", 00:10:14.411 "superblock": false, 00:10:14.411 "num_base_bdevs": 2, 00:10:14.411 "num_base_bdevs_discovered": 1, 00:10:14.411 "num_base_bdevs_operational": 2, 00:10:14.411 "base_bdevs_list": [ 00:10:14.411 { 00:10:14.411 "name": "BaseBdev1", 00:10:14.411 "uuid": "0830b484-e27b-40f4-b72c-66a10c6e4156", 00:10:14.411 "is_configured": true, 00:10:14.411 "data_offset": 0, 00:10:14.411 "data_size": 65536 00:10:14.411 }, 00:10:14.411 { 00:10:14.411 "name": "BaseBdev2", 00:10:14.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.411 "is_configured": false, 00:10:14.411 "data_offset": 0, 00:10:14.411 "data_size": 0 
00:10:14.411 } 00:10:14.411 ] 00:10:14.411 }' 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.411 20:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.979 [2024-11-26 20:23:08.412228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.979 [2024-11-26 20:23:08.412442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:14.979 [2024-11-26 20:23:08.412474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:14.979 [2024-11-26 20:23:08.412828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:14.979 [2024-11-26 20:23:08.413094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:14.979 [2024-11-26 20:23:08.413149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:14.979 [2024-11-26 20:23:08.413537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.979 BaseBdev2 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.979 20:23:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.979 [ 00:10:14.979 { 00:10:14.979 "name": "BaseBdev2", 00:10:14.979 "aliases": [ 00:10:14.979 "526b3ab3-49e2-47f3-846c-7b359da75be2" 00:10:14.979 ], 00:10:14.979 "product_name": "Malloc disk", 00:10:14.979 "block_size": 512, 00:10:14.979 "num_blocks": 65536, 00:10:14.979 "uuid": "526b3ab3-49e2-47f3-846c-7b359da75be2", 00:10:14.979 "assigned_rate_limits": { 00:10:14.979 "rw_ios_per_sec": 0, 00:10:14.979 "rw_mbytes_per_sec": 0, 00:10:14.979 "r_mbytes_per_sec": 0, 00:10:14.979 "w_mbytes_per_sec": 0 00:10:14.979 }, 00:10:14.979 "claimed": true, 00:10:14.979 "claim_type": "exclusive_write", 00:10:14.979 "zoned": false, 00:10:14.979 "supported_io_types": { 00:10:14.979 "read": true, 00:10:14.979 "write": true, 00:10:14.979 "unmap": true, 00:10:14.979 "flush": true, 00:10:14.979 "reset": true, 00:10:14.979 "nvme_admin": false, 00:10:14.979 "nvme_io": false, 00:10:14.979 "nvme_io_md": 
false, 00:10:14.979 "write_zeroes": true, 00:10:14.979 "zcopy": true, 00:10:14.979 "get_zone_info": false, 00:10:14.979 "zone_management": false, 00:10:14.979 "zone_append": false, 00:10:14.979 "compare": false, 00:10:14.979 "compare_and_write": false, 00:10:14.979 "abort": true, 00:10:14.979 "seek_hole": false, 00:10:14.979 "seek_data": false, 00:10:14.979 "copy": true, 00:10:14.979 "nvme_iov_md": false 00:10:14.979 }, 00:10:14.979 "memory_domains": [ 00:10:14.979 { 00:10:14.979 "dma_device_id": "system", 00:10:14.979 "dma_device_type": 1 00:10:14.979 }, 00:10:14.979 { 00:10:14.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.979 "dma_device_type": 2 00:10:14.979 } 00:10:14.979 ], 00:10:14.979 "driver_specific": {} 00:10:14.979 } 00:10:14.979 ] 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.979 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.980 "name": "Existed_Raid", 00:10:14.980 "uuid": "31680de4-9508-4189-b8b3-20ce3dd0f68f", 00:10:14.980 "strip_size_kb": 64, 00:10:14.980 "state": "online", 00:10:14.980 "raid_level": "raid0", 00:10:14.980 "superblock": false, 00:10:14.980 "num_base_bdevs": 2, 00:10:14.980 "num_base_bdevs_discovered": 2, 00:10:14.980 "num_base_bdevs_operational": 2, 00:10:14.980 "base_bdevs_list": [ 00:10:14.980 { 00:10:14.980 "name": "BaseBdev1", 00:10:14.980 "uuid": "0830b484-e27b-40f4-b72c-66a10c6e4156", 00:10:14.980 "is_configured": true, 00:10:14.980 "data_offset": 0, 00:10:14.980 "data_size": 65536 00:10:14.980 }, 00:10:14.980 { 00:10:14.980 "name": "BaseBdev2", 00:10:14.980 "uuid": "526b3ab3-49e2-47f3-846c-7b359da75be2", 00:10:14.980 "is_configured": true, 00:10:14.980 "data_offset": 0, 00:10:14.980 "data_size": 65536 00:10:14.980 } 00:10:14.980 ] 00:10:14.980 }' 00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:14.980 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.547 [2024-11-26 20:23:08.967660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.547 20:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.547 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.547 "name": "Existed_Raid", 00:10:15.547 "aliases": [ 00:10:15.547 "31680de4-9508-4189-b8b3-20ce3dd0f68f" 00:10:15.547 ], 00:10:15.547 "product_name": "Raid Volume", 00:10:15.547 "block_size": 512, 00:10:15.547 "num_blocks": 131072, 00:10:15.547 "uuid": "31680de4-9508-4189-b8b3-20ce3dd0f68f", 00:10:15.547 "assigned_rate_limits": { 00:10:15.547 "rw_ios_per_sec": 0, 00:10:15.547 "rw_mbytes_per_sec": 0, 00:10:15.547 "r_mbytes_per_sec": 
0, 00:10:15.547 "w_mbytes_per_sec": 0 00:10:15.547 }, 00:10:15.547 "claimed": false, 00:10:15.547 "zoned": false, 00:10:15.547 "supported_io_types": { 00:10:15.547 "read": true, 00:10:15.547 "write": true, 00:10:15.547 "unmap": true, 00:10:15.547 "flush": true, 00:10:15.547 "reset": true, 00:10:15.547 "nvme_admin": false, 00:10:15.547 "nvme_io": false, 00:10:15.547 "nvme_io_md": false, 00:10:15.547 "write_zeroes": true, 00:10:15.547 "zcopy": false, 00:10:15.547 "get_zone_info": false, 00:10:15.547 "zone_management": false, 00:10:15.547 "zone_append": false, 00:10:15.547 "compare": false, 00:10:15.547 "compare_and_write": false, 00:10:15.547 "abort": false, 00:10:15.547 "seek_hole": false, 00:10:15.547 "seek_data": false, 00:10:15.547 "copy": false, 00:10:15.547 "nvme_iov_md": false 00:10:15.547 }, 00:10:15.547 "memory_domains": [ 00:10:15.547 { 00:10:15.547 "dma_device_id": "system", 00:10:15.547 "dma_device_type": 1 00:10:15.547 }, 00:10:15.547 { 00:10:15.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.547 "dma_device_type": 2 00:10:15.547 }, 00:10:15.547 { 00:10:15.547 "dma_device_id": "system", 00:10:15.547 "dma_device_type": 1 00:10:15.547 }, 00:10:15.547 { 00:10:15.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.547 "dma_device_type": 2 00:10:15.547 } 00:10:15.547 ], 00:10:15.547 "driver_specific": { 00:10:15.547 "raid": { 00:10:15.547 "uuid": "31680de4-9508-4189-b8b3-20ce3dd0f68f", 00:10:15.547 "strip_size_kb": 64, 00:10:15.547 "state": "online", 00:10:15.547 "raid_level": "raid0", 00:10:15.547 "superblock": false, 00:10:15.547 "num_base_bdevs": 2, 00:10:15.547 "num_base_bdevs_discovered": 2, 00:10:15.547 "num_base_bdevs_operational": 2, 00:10:15.547 "base_bdevs_list": [ 00:10:15.547 { 00:10:15.547 "name": "BaseBdev1", 00:10:15.547 "uuid": "0830b484-e27b-40f4-b72c-66a10c6e4156", 00:10:15.547 "is_configured": true, 00:10:15.547 "data_offset": 0, 00:10:15.547 "data_size": 65536 00:10:15.547 }, 00:10:15.547 { 00:10:15.547 "name": "BaseBdev2", 
00:10:15.547 "uuid": "526b3ab3-49e2-47f3-846c-7b359da75be2", 00:10:15.547 "is_configured": true, 00:10:15.547 "data_offset": 0, 00:10:15.547 "data_size": 65536 00:10:15.547 } 00:10:15.547 ] 00:10:15.547 } 00:10:15.547 } 00:10:15.547 }' 00:10:15.547 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.547 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.547 BaseBdev2' 00:10:15.547 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.806 [2024-11-26 20:23:09.219023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.806 [2024-11-26 20:23:09.219068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.806 [2024-11-26 20:23:09.219144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.806 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.065 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.065 "name": "Existed_Raid", 00:10:16.065 "uuid": "31680de4-9508-4189-b8b3-20ce3dd0f68f", 00:10:16.065 "strip_size_kb": 64, 00:10:16.065 
"state": "offline", 00:10:16.065 "raid_level": "raid0", 00:10:16.065 "superblock": false, 00:10:16.065 "num_base_bdevs": 2, 00:10:16.065 "num_base_bdevs_discovered": 1, 00:10:16.065 "num_base_bdevs_operational": 1, 00:10:16.065 "base_bdevs_list": [ 00:10:16.065 { 00:10:16.065 "name": null, 00:10:16.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.065 "is_configured": false, 00:10:16.065 "data_offset": 0, 00:10:16.065 "data_size": 65536 00:10:16.065 }, 00:10:16.065 { 00:10:16.065 "name": "BaseBdev2", 00:10:16.065 "uuid": "526b3ab3-49e2-47f3-846c-7b359da75be2", 00:10:16.066 "is_configured": true, 00:10:16.066 "data_offset": 0, 00:10:16.066 "data_size": 65536 00:10:16.066 } 00:10:16.066 ] 00:10:16.066 }' 00:10:16.066 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.066 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.325 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.325 [2024-11-26 20:23:09.822037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.325 [2024-11-26 20:23:09.822165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60911 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60911 ']' 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60911 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.585 20:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60911 00:10:16.585 killing process with pid 60911 00:10:16.585 20:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.585 20:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.585 20:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60911' 00:10:16.585 20:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60911 00:10:16.585 [2024-11-26 20:23:10.032224] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.585 20:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60911 00:10:16.585 [2024-11-26 20:23:10.052506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:17.965 00:10:17.965 real 0m5.389s 00:10:17.965 user 0m7.809s 00:10:17.965 sys 0m0.812s 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.965 ************************************ 00:10:17.965 END TEST raid_state_function_test 00:10:17.965 ************************************ 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.965 20:23:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:17.965 20:23:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:17.965 20:23:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.965 20:23:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.965 ************************************ 00:10:17.965 START TEST raid_state_function_test_sb 00:10:17.965 ************************************ 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61171 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61171' 00:10:17.965 Process raid pid: 61171 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61171 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61171 ']' 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.965 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.965 [2024-11-26 20:23:11.455045] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:17.965 [2024-11-26 20:23:11.455293] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.224 [2024-11-26 20:23:11.616238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.224 [2024-11-26 20:23:11.744441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.483 [2024-11-26 20:23:12.005684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.483 [2024-11-26 20:23:12.005738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.049 [2024-11-26 20:23:12.371407] bdev.c:8475:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:10:19.049 [2024-11-26 20:23:12.371472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.049 [2024-11-26 20:23:12.371485] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.049 [2024-11-26 20:23:12.371496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.049 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.050 "name": "Existed_Raid", 00:10:19.050 "uuid": "382aaf42-cfc2-4f0b-9acc-bea3f101f24a", 00:10:19.050 "strip_size_kb": 64, 00:10:19.050 "state": "configuring", 00:10:19.050 "raid_level": "raid0", 00:10:19.050 "superblock": true, 00:10:19.050 "num_base_bdevs": 2, 00:10:19.050 "num_base_bdevs_discovered": 0, 00:10:19.050 "num_base_bdevs_operational": 2, 00:10:19.050 "base_bdevs_list": [ 00:10:19.050 { 00:10:19.050 "name": "BaseBdev1", 00:10:19.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.050 "is_configured": false, 00:10:19.050 "data_offset": 0, 00:10:19.050 "data_size": 0 00:10:19.050 }, 00:10:19.050 { 00:10:19.050 "name": "BaseBdev2", 00:10:19.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.050 "is_configured": false, 00:10:19.050 "data_offset": 0, 00:10:19.050 "data_size": 0 00:10:19.050 } 00:10:19.050 ] 00:10:19.050 }' 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.050 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.310 [2024-11-26 20:23:12.834549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:10:19.310 [2024-11-26 20:23:12.834590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.310 [2024-11-26 20:23:12.846534] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:19.310 [2024-11-26 20:23:12.846586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.310 [2024-11-26 20:23:12.846598] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.310 [2024-11-26 20:23:12.846612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.310 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.569 [2024-11-26 20:23:12.903091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.569 BaseBdev1 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.569 [ 00:10:19.569 { 00:10:19.569 "name": "BaseBdev1", 00:10:19.569 "aliases": [ 00:10:19.569 "cffa6bf5-d799-4a66-b83e-b5cb419e4991" 00:10:19.569 ], 00:10:19.569 "product_name": "Malloc disk", 00:10:19.569 "block_size": 512, 00:10:19.569 "num_blocks": 65536, 00:10:19.569 "uuid": "cffa6bf5-d799-4a66-b83e-b5cb419e4991", 00:10:19.569 "assigned_rate_limits": { 00:10:19.569 "rw_ios_per_sec": 0, 00:10:19.569 "rw_mbytes_per_sec": 0, 00:10:19.569 "r_mbytes_per_sec": 0, 00:10:19.569 "w_mbytes_per_sec": 0 00:10:19.569 }, 00:10:19.569 "claimed": true, 
00:10:19.569 "claim_type": "exclusive_write", 00:10:19.569 "zoned": false, 00:10:19.569 "supported_io_types": { 00:10:19.569 "read": true, 00:10:19.569 "write": true, 00:10:19.569 "unmap": true, 00:10:19.569 "flush": true, 00:10:19.569 "reset": true, 00:10:19.569 "nvme_admin": false, 00:10:19.569 "nvme_io": false, 00:10:19.569 "nvme_io_md": false, 00:10:19.569 "write_zeroes": true, 00:10:19.569 "zcopy": true, 00:10:19.569 "get_zone_info": false, 00:10:19.569 "zone_management": false, 00:10:19.569 "zone_append": false, 00:10:19.569 "compare": false, 00:10:19.569 "compare_and_write": false, 00:10:19.569 "abort": true, 00:10:19.569 "seek_hole": false, 00:10:19.569 "seek_data": false, 00:10:19.569 "copy": true, 00:10:19.569 "nvme_iov_md": false 00:10:19.569 }, 00:10:19.569 "memory_domains": [ 00:10:19.569 { 00:10:19.569 "dma_device_id": "system", 00:10:19.569 "dma_device_type": 1 00:10:19.569 }, 00:10:19.569 { 00:10:19.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.569 "dma_device_type": 2 00:10:19.569 } 00:10:19.569 ], 00:10:19.569 "driver_specific": {} 00:10:19.569 } 00:10:19.569 ] 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.569 20:23:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.569 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.569 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.569 "name": "Existed_Raid", 00:10:19.569 "uuid": "8f6f1cff-b61a-4e83-bd71-0270cd72e1cb", 00:10:19.569 "strip_size_kb": 64, 00:10:19.569 "state": "configuring", 00:10:19.569 "raid_level": "raid0", 00:10:19.569 "superblock": true, 00:10:19.569 "num_base_bdevs": 2, 00:10:19.569 "num_base_bdevs_discovered": 1, 00:10:19.569 "num_base_bdevs_operational": 2, 00:10:19.569 "base_bdevs_list": [ 00:10:19.569 { 00:10:19.569 "name": "BaseBdev1", 00:10:19.569 "uuid": "cffa6bf5-d799-4a66-b83e-b5cb419e4991", 00:10:19.569 "is_configured": true, 00:10:19.569 "data_offset": 2048, 00:10:19.569 "data_size": 63488 00:10:19.569 }, 00:10:19.569 { 00:10:19.569 "name": "BaseBdev2", 00:10:19.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.569 
"is_configured": false, 00:10:19.569 "data_offset": 0, 00:10:19.569 "data_size": 0 00:10:19.569 } 00:10:19.569 ] 00:10:19.569 }' 00:10:19.569 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.569 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.137 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:20.137 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.137 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.137 [2024-11-26 20:23:13.418316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:20.137 [2024-11-26 20:23:13.418449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:20.137 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.137 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:20.137 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.137 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.137 [2024-11-26 20:23:13.430394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.137 [2024-11-26 20:23:13.432752] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:20.138 [2024-11-26 20:23:13.432853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.138 20:23:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.138 20:23:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.138 "name": "Existed_Raid", 00:10:20.138 "uuid": "e5b9a51d-6a48-4454-8124-3b22f50902f1", 00:10:20.138 "strip_size_kb": 64, 00:10:20.138 "state": "configuring", 00:10:20.138 "raid_level": "raid0", 00:10:20.138 "superblock": true, 00:10:20.138 "num_base_bdevs": 2, 00:10:20.138 "num_base_bdevs_discovered": 1, 00:10:20.138 "num_base_bdevs_operational": 2, 00:10:20.138 "base_bdevs_list": [ 00:10:20.138 { 00:10:20.138 "name": "BaseBdev1", 00:10:20.138 "uuid": "cffa6bf5-d799-4a66-b83e-b5cb419e4991", 00:10:20.138 "is_configured": true, 00:10:20.138 "data_offset": 2048, 00:10:20.138 "data_size": 63488 00:10:20.138 }, 00:10:20.138 { 00:10:20.138 "name": "BaseBdev2", 00:10:20.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.138 "is_configured": false, 00:10:20.138 "data_offset": 0, 00:10:20.138 "data_size": 0 00:10:20.138 } 00:10:20.138 ] 00:10:20.138 }' 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.138 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.398 [2024-11-26 20:23:13.947085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.398 [2024-11-26 20:23:13.947422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.398 [2024-11-26 20:23:13.947441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:20.398 [2024-11-26 20:23:13.947743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:10:20.398 [2024-11-26 20:23:13.947927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.398 [2024-11-26 20:23:13.947945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:20.398 BaseBdev2 00:10:20.398 [2024-11-26 20:23:13.948096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.398 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.658 20:23:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.658 [ 00:10:20.658 { 00:10:20.658 "name": "BaseBdev2", 00:10:20.658 "aliases": [ 00:10:20.658 "dbcc4a32-de6d-46e3-a18e-7dc55701267f" 00:10:20.658 ], 00:10:20.658 "product_name": "Malloc disk", 00:10:20.658 "block_size": 512, 00:10:20.658 "num_blocks": 65536, 00:10:20.658 "uuid": "dbcc4a32-de6d-46e3-a18e-7dc55701267f", 00:10:20.658 "assigned_rate_limits": { 00:10:20.658 "rw_ios_per_sec": 0, 00:10:20.658 "rw_mbytes_per_sec": 0, 00:10:20.658 "r_mbytes_per_sec": 0, 00:10:20.658 "w_mbytes_per_sec": 0 00:10:20.658 }, 00:10:20.658 "claimed": true, 00:10:20.658 "claim_type": "exclusive_write", 00:10:20.658 "zoned": false, 00:10:20.658 "supported_io_types": { 00:10:20.658 "read": true, 00:10:20.658 "write": true, 00:10:20.658 "unmap": true, 00:10:20.658 "flush": true, 00:10:20.658 "reset": true, 00:10:20.658 "nvme_admin": false, 00:10:20.658 "nvme_io": false, 00:10:20.658 "nvme_io_md": false, 00:10:20.658 "write_zeroes": true, 00:10:20.658 "zcopy": true, 00:10:20.658 "get_zone_info": false, 00:10:20.658 "zone_management": false, 00:10:20.658 "zone_append": false, 00:10:20.658 "compare": false, 00:10:20.658 "compare_and_write": false, 00:10:20.658 "abort": true, 00:10:20.658 "seek_hole": false, 00:10:20.658 "seek_data": false, 00:10:20.658 "copy": true, 00:10:20.658 "nvme_iov_md": false 00:10:20.658 }, 00:10:20.658 "memory_domains": [ 00:10:20.658 { 00:10:20.658 "dma_device_id": "system", 00:10:20.658 "dma_device_type": 1 00:10:20.658 }, 00:10:20.658 { 00:10:20.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.658 "dma_device_type": 2 00:10:20.658 } 00:10:20.658 ], 00:10:20.658 "driver_specific": {} 00:10:20.658 } 00:10:20.658 ] 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.658 20:23:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.658 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.658 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.658 20:23:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.658 "name": "Existed_Raid", 00:10:20.658 "uuid": "e5b9a51d-6a48-4454-8124-3b22f50902f1", 00:10:20.658 "strip_size_kb": 64, 00:10:20.658 "state": "online", 00:10:20.658 "raid_level": "raid0", 00:10:20.658 "superblock": true, 00:10:20.658 "num_base_bdevs": 2, 00:10:20.658 "num_base_bdevs_discovered": 2, 00:10:20.658 "num_base_bdevs_operational": 2, 00:10:20.658 "base_bdevs_list": [ 00:10:20.658 { 00:10:20.658 "name": "BaseBdev1", 00:10:20.658 "uuid": "cffa6bf5-d799-4a66-b83e-b5cb419e4991", 00:10:20.658 "is_configured": true, 00:10:20.658 "data_offset": 2048, 00:10:20.658 "data_size": 63488 00:10:20.658 }, 00:10:20.658 { 00:10:20.658 "name": "BaseBdev2", 00:10:20.658 "uuid": "dbcc4a32-de6d-46e3-a18e-7dc55701267f", 00:10:20.658 "is_configured": true, 00:10:20.658 "data_offset": 2048, 00:10:20.658 "data_size": 63488 00:10:20.658 } 00:10:20.658 ] 00:10:20.658 }' 00:10:20.658 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.658 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.918 [2024-11-26 20:23:14.438678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.918 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.176 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.176 "name": "Existed_Raid", 00:10:21.176 "aliases": [ 00:10:21.176 "e5b9a51d-6a48-4454-8124-3b22f50902f1" 00:10:21.176 ], 00:10:21.176 "product_name": "Raid Volume", 00:10:21.176 "block_size": 512, 00:10:21.176 "num_blocks": 126976, 00:10:21.176 "uuid": "e5b9a51d-6a48-4454-8124-3b22f50902f1", 00:10:21.176 "assigned_rate_limits": { 00:10:21.176 "rw_ios_per_sec": 0, 00:10:21.176 "rw_mbytes_per_sec": 0, 00:10:21.176 "r_mbytes_per_sec": 0, 00:10:21.176 "w_mbytes_per_sec": 0 00:10:21.176 }, 00:10:21.176 "claimed": false, 00:10:21.176 "zoned": false, 00:10:21.176 "supported_io_types": { 00:10:21.176 "read": true, 00:10:21.176 "write": true, 00:10:21.176 "unmap": true, 00:10:21.176 "flush": true, 00:10:21.176 "reset": true, 00:10:21.176 "nvme_admin": false, 00:10:21.176 "nvme_io": false, 00:10:21.176 "nvme_io_md": false, 00:10:21.176 "write_zeroes": true, 00:10:21.176 "zcopy": false, 00:10:21.176 "get_zone_info": false, 00:10:21.176 "zone_management": false, 00:10:21.176 "zone_append": false, 00:10:21.176 "compare": false, 00:10:21.176 "compare_and_write": false, 00:10:21.176 "abort": false, 00:10:21.176 "seek_hole": false, 00:10:21.176 "seek_data": false, 00:10:21.176 "copy": false, 00:10:21.176 "nvme_iov_md": false 00:10:21.176 }, 00:10:21.176 "memory_domains": [ 00:10:21.176 { 00:10:21.176 
"dma_device_id": "system", 00:10:21.176 "dma_device_type": 1 00:10:21.176 }, 00:10:21.176 { 00:10:21.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.176 "dma_device_type": 2 00:10:21.176 }, 00:10:21.176 { 00:10:21.176 "dma_device_id": "system", 00:10:21.176 "dma_device_type": 1 00:10:21.176 }, 00:10:21.176 { 00:10:21.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.176 "dma_device_type": 2 00:10:21.176 } 00:10:21.176 ], 00:10:21.176 "driver_specific": { 00:10:21.176 "raid": { 00:10:21.176 "uuid": "e5b9a51d-6a48-4454-8124-3b22f50902f1", 00:10:21.176 "strip_size_kb": 64, 00:10:21.176 "state": "online", 00:10:21.176 "raid_level": "raid0", 00:10:21.176 "superblock": true, 00:10:21.176 "num_base_bdevs": 2, 00:10:21.176 "num_base_bdevs_discovered": 2, 00:10:21.176 "num_base_bdevs_operational": 2, 00:10:21.176 "base_bdevs_list": [ 00:10:21.176 { 00:10:21.177 "name": "BaseBdev1", 00:10:21.177 "uuid": "cffa6bf5-d799-4a66-b83e-b5cb419e4991", 00:10:21.177 "is_configured": true, 00:10:21.177 "data_offset": 2048, 00:10:21.177 "data_size": 63488 00:10:21.177 }, 00:10:21.177 { 00:10:21.177 "name": "BaseBdev2", 00:10:21.177 "uuid": "dbcc4a32-de6d-46e3-a18e-7dc55701267f", 00:10:21.177 "is_configured": true, 00:10:21.177 "data_offset": 2048, 00:10:21.177 "data_size": 63488 00:10:21.177 } 00:10:21.177 ] 00:10:21.177 } 00:10:21.177 } 00:10:21.177 }' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:21.177 BaseBdev2' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.177 20:23:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.177 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.177 [2024-11-26 20:23:14.689990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.177 [2024-11-26 20:23:14.690032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.177 [2024-11-26 20:23:14.690092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:21.435 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.436 "name": "Existed_Raid", 00:10:21.436 "uuid": "e5b9a51d-6a48-4454-8124-3b22f50902f1", 00:10:21.436 "strip_size_kb": 64, 00:10:21.436 "state": "offline", 00:10:21.436 "raid_level": "raid0", 00:10:21.436 "superblock": true, 00:10:21.436 "num_base_bdevs": 2, 00:10:21.436 "num_base_bdevs_discovered": 1, 00:10:21.436 "num_base_bdevs_operational": 1, 00:10:21.436 "base_bdevs_list": [ 00:10:21.436 { 00:10:21.436 "name": null, 00:10:21.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.436 "is_configured": false, 00:10:21.436 "data_offset": 0, 00:10:21.436 "data_size": 63488 00:10:21.436 }, 00:10:21.436 { 00:10:21.436 "name": "BaseBdev2", 00:10:21.436 "uuid": "dbcc4a32-de6d-46e3-a18e-7dc55701267f", 00:10:21.436 "is_configured": true, 00:10:21.436 "data_offset": 2048, 00:10:21.436 "data_size": 63488 00:10:21.436 } 00:10:21.436 ] 
00:10:21.436 }' 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.436 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.004 [2024-11-26 20:23:15.354047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:22.004 [2024-11-26 20:23:15.354117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.004 20:23:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61171 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61171 ']' 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61171 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.004 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61171 00:10:22.263 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.263 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:10:22.263 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61171' 00:10:22.263 killing process with pid 61171 00:10:22.263 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61171 00:10:22.263 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61171 00:10:22.263 [2024-11-26 20:23:15.567657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.263 [2024-11-26 20:23:15.588043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.641 20:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.641 00:10:23.641 real 0m5.610s 00:10:23.641 user 0m8.055s 00:10:23.641 sys 0m0.868s 00:10:23.641 20:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.641 ************************************ 00:10:23.641 20:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 END TEST raid_state_function_test_sb 00:10:23.641 ************************************ 00:10:23.641 20:23:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:23.641 20:23:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.641 20:23:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.641 20:23:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 ************************************ 00:10:23.641 START TEST raid_superblock_test 00:10:23.641 ************************************ 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61427 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61427 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61427 ']' 00:10:23.641 20:23:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.641 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 [2024-11-26 20:23:17.117585] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:23.641 [2024-11-26 20:23:17.117819] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61427 ] 00:10:23.900 [2024-11-26 20:23:17.297322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.900 [2024-11-26 20:23:17.435306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.162 [2024-11-26 20:23:17.677788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.162 [2024-11-26 20:23:17.677939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.729 
20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.729 malloc1 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.729 [2024-11-26 20:23:18.117433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.729 [2024-11-26 20:23:18.117508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.729 [2024-11-26 20:23:18.117536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:24.729 [2024-11-26 20:23:18.117548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:24.729 [2024-11-26 20:23:18.120075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.729 [2024-11-26 20:23:18.120119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.729 pt1 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.729 malloc2 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.729 [2024-11-26 20:23:18.179341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.729 [2024-11-26 20:23:18.179421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.729 [2024-11-26 20:23:18.179454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:24.729 [2024-11-26 20:23:18.179465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.729 [2024-11-26 20:23:18.182017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.729 [2024-11-26 20:23:18.182122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.729 pt2 00:10:24.729 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.730 [2024-11-26 20:23:18.191414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.730 [2024-11-26 20:23:18.193601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.730 [2024-11-26 20:23:18.193808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:24.730 [2024-11-26 20:23:18.193824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:10:24.730 [2024-11-26 20:23:18.194158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:24.730 [2024-11-26 20:23:18.194348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:24.730 [2024-11-26 20:23:18.194368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:24.730 [2024-11-26 20:23:18.194572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.730 20:23:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.730 "name": "raid_bdev1", 00:10:24.730 "uuid": "7fa11f45-da74-4b8e-810f-9a0a96010ed5", 00:10:24.730 "strip_size_kb": 64, 00:10:24.730 "state": "online", 00:10:24.730 "raid_level": "raid0", 00:10:24.730 "superblock": true, 00:10:24.730 "num_base_bdevs": 2, 00:10:24.730 "num_base_bdevs_discovered": 2, 00:10:24.730 "num_base_bdevs_operational": 2, 00:10:24.730 "base_bdevs_list": [ 00:10:24.730 { 00:10:24.730 "name": "pt1", 00:10:24.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.730 "is_configured": true, 00:10:24.730 "data_offset": 2048, 00:10:24.730 "data_size": 63488 00:10:24.730 }, 00:10:24.730 { 00:10:24.730 "name": "pt2", 00:10:24.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.730 "is_configured": true, 00:10:24.730 "data_offset": 2048, 00:10:24.730 "data_size": 63488 00:10:24.730 } 00:10:24.730 ] 00:10:24.730 }' 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.730 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.299 
20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.299 [2024-11-26 20:23:18.658881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.299 "name": "raid_bdev1", 00:10:25.299 "aliases": [ 00:10:25.299 "7fa11f45-da74-4b8e-810f-9a0a96010ed5" 00:10:25.299 ], 00:10:25.299 "product_name": "Raid Volume", 00:10:25.299 "block_size": 512, 00:10:25.299 "num_blocks": 126976, 00:10:25.299 "uuid": "7fa11f45-da74-4b8e-810f-9a0a96010ed5", 00:10:25.299 "assigned_rate_limits": { 00:10:25.299 "rw_ios_per_sec": 0, 00:10:25.299 "rw_mbytes_per_sec": 0, 00:10:25.299 "r_mbytes_per_sec": 0, 00:10:25.299 "w_mbytes_per_sec": 0 00:10:25.299 }, 00:10:25.299 "claimed": false, 00:10:25.299 "zoned": false, 00:10:25.299 "supported_io_types": { 00:10:25.299 "read": true, 00:10:25.299 "write": true, 00:10:25.299 "unmap": true, 00:10:25.299 "flush": true, 00:10:25.299 "reset": true, 00:10:25.299 "nvme_admin": false, 00:10:25.299 "nvme_io": false, 00:10:25.299 "nvme_io_md": false, 00:10:25.299 "write_zeroes": true, 00:10:25.299 "zcopy": false, 00:10:25.299 "get_zone_info": false, 00:10:25.299 "zone_management": false, 00:10:25.299 "zone_append": false, 00:10:25.299 "compare": false, 00:10:25.299 "compare_and_write": false, 00:10:25.299 "abort": false, 00:10:25.299 "seek_hole": false, 00:10:25.299 
"seek_data": false, 00:10:25.299 "copy": false, 00:10:25.299 "nvme_iov_md": false 00:10:25.299 }, 00:10:25.299 "memory_domains": [ 00:10:25.299 { 00:10:25.299 "dma_device_id": "system", 00:10:25.299 "dma_device_type": 1 00:10:25.299 }, 00:10:25.299 { 00:10:25.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.299 "dma_device_type": 2 00:10:25.299 }, 00:10:25.299 { 00:10:25.299 "dma_device_id": "system", 00:10:25.299 "dma_device_type": 1 00:10:25.299 }, 00:10:25.299 { 00:10:25.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.299 "dma_device_type": 2 00:10:25.299 } 00:10:25.299 ], 00:10:25.299 "driver_specific": { 00:10:25.299 "raid": { 00:10:25.299 "uuid": "7fa11f45-da74-4b8e-810f-9a0a96010ed5", 00:10:25.299 "strip_size_kb": 64, 00:10:25.299 "state": "online", 00:10:25.299 "raid_level": "raid0", 00:10:25.299 "superblock": true, 00:10:25.299 "num_base_bdevs": 2, 00:10:25.299 "num_base_bdevs_discovered": 2, 00:10:25.299 "num_base_bdevs_operational": 2, 00:10:25.299 "base_bdevs_list": [ 00:10:25.299 { 00:10:25.299 "name": "pt1", 00:10:25.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.299 "is_configured": true, 00:10:25.299 "data_offset": 2048, 00:10:25.299 "data_size": 63488 00:10:25.299 }, 00:10:25.299 { 00:10:25.299 "name": "pt2", 00:10:25.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.299 "is_configured": true, 00:10:25.299 "data_offset": 2048, 00:10:25.299 "data_size": 63488 00:10:25.299 } 00:10:25.299 ] 00:10:25.299 } 00:10:25.299 } 00:10:25.299 }' 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.299 pt2' 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.299 20:23:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.299 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 [2024-11-26 20:23:18.918484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7fa11f45-da74-4b8e-810f-9a0a96010ed5 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7fa11f45-da74-4b8e-810f-9a0a96010ed5 ']' 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 [2024-11-26 20:23:18.966037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.558 [2024-11-26 20:23:18.966076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.558 [2024-11-26 20:23:18.966183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.558 [2024-11-26 20:23:18.966254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.558 [2024-11-26 20:23:18.966270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.558 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.559 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:25.559 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.559 [2024-11-26 20:23:19.093869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:25.559 [2024-11-26 20:23:19.096132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:25.559 [2024-11-26 20:23:19.096274] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:10:25.559 [2024-11-26 20:23:19.096396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:25.559 [2024-11-26 20:23:19.096458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.559 [2024-11-26 20:23:19.096509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:25.559 request: 00:10:25.559 { 00:10:25.559 "name": "raid_bdev1", 00:10:25.559 "raid_level": "raid0", 00:10:25.559 "base_bdevs": [ 00:10:25.559 "malloc1", 00:10:25.559 "malloc2" 00:10:25.559 ], 00:10:25.559 "strip_size_kb": 64, 00:10:25.559 "superblock": false, 00:10:25.559 "method": "bdev_raid_create", 00:10:25.559 "req_id": 1 00:10:25.559 } 00:10:25.559 Got JSON-RPC error response 00:10:25.559 response: 00:10:25.559 { 00:10:25.559 "code": -17, 00:10:25.559 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:25.559 } 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.559 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.833 20:23:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.833 [2024-11-26 20:23:19.161726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.833 [2024-11-26 20:23:19.161864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.833 [2024-11-26 20:23:19.161912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:25.833 [2024-11-26 20:23:19.161958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.833 [2024-11-26 20:23:19.164605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.833 [2024-11-26 20:23:19.164697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.833 [2024-11-26 20:23:19.164833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:25.833 [2024-11-26 20:23:19.164944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.833 pt1 00:10:25.833 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.834 "name": "raid_bdev1", 00:10:25.834 "uuid": "7fa11f45-da74-4b8e-810f-9a0a96010ed5", 00:10:25.834 "strip_size_kb": 64, 00:10:25.834 "state": "configuring", 00:10:25.834 "raid_level": "raid0", 00:10:25.834 "superblock": true, 00:10:25.834 "num_base_bdevs": 2, 00:10:25.834 "num_base_bdevs_discovered": 1, 00:10:25.834 "num_base_bdevs_operational": 2, 00:10:25.834 "base_bdevs_list": [ 00:10:25.834 { 00:10:25.834 "name": "pt1", 00:10:25.834 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:25.834 "is_configured": true, 00:10:25.834 "data_offset": 2048, 00:10:25.834 "data_size": 63488 00:10:25.834 }, 00:10:25.834 { 00:10:25.834 "name": null, 00:10:25.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.834 "is_configured": false, 00:10:25.834 "data_offset": 2048, 00:10:25.834 "data_size": 63488 00:10:25.834 } 00:10:25.834 ] 00:10:25.834 }' 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.834 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.404 [2024-11-26 20:23:19.676961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.404 [2024-11-26 20:23:19.677056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.404 [2024-11-26 20:23:19.677083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:26.404 [2024-11-26 20:23:19.677097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.404 [2024-11-26 20:23:19.677643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.404 [2024-11-26 20:23:19.677669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:10:26.404 [2024-11-26 20:23:19.677765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.404 [2024-11-26 20:23:19.677798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.404 [2024-11-26 20:23:19.677930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.404 [2024-11-26 20:23:19.677942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:26.404 [2024-11-26 20:23:19.678236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:26.404 [2024-11-26 20:23:19.678444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.404 [2024-11-26 20:23:19.678458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:26.404 [2024-11-26 20:23:19.678625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.404 pt2 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.404 "name": "raid_bdev1", 00:10:26.404 "uuid": "7fa11f45-da74-4b8e-810f-9a0a96010ed5", 00:10:26.404 "strip_size_kb": 64, 00:10:26.404 "state": "online", 00:10:26.404 "raid_level": "raid0", 00:10:26.404 "superblock": true, 00:10:26.404 "num_base_bdevs": 2, 00:10:26.404 "num_base_bdevs_discovered": 2, 00:10:26.404 "num_base_bdevs_operational": 2, 00:10:26.404 "base_bdevs_list": [ 00:10:26.404 { 00:10:26.404 "name": "pt1", 00:10:26.404 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.404 "is_configured": true, 00:10:26.404 "data_offset": 2048, 00:10:26.404 "data_size": 63488 00:10:26.404 }, 00:10:26.404 { 00:10:26.404 "name": "pt2", 00:10:26.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.404 "is_configured": true, 00:10:26.404 "data_offset": 2048, 00:10:26.404 "data_size": 63488 00:10:26.404 } 00:10:26.404 ] 00:10:26.404 }' 00:10:26.404 20:23:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.404 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.663 [2024-11-26 20:23:20.172605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.663 "name": "raid_bdev1", 00:10:26.663 "aliases": [ 00:10:26.663 "7fa11f45-da74-4b8e-810f-9a0a96010ed5" 00:10:26.663 ], 00:10:26.663 "product_name": "Raid Volume", 00:10:26.663 "block_size": 512, 00:10:26.663 "num_blocks": 126976, 00:10:26.663 "uuid": "7fa11f45-da74-4b8e-810f-9a0a96010ed5", 00:10:26.663 "assigned_rate_limits": { 00:10:26.663 "rw_ios_per_sec": 0, 00:10:26.663 "rw_mbytes_per_sec": 0, 00:10:26.663 
"r_mbytes_per_sec": 0, 00:10:26.663 "w_mbytes_per_sec": 0 00:10:26.663 }, 00:10:26.663 "claimed": false, 00:10:26.663 "zoned": false, 00:10:26.663 "supported_io_types": { 00:10:26.663 "read": true, 00:10:26.663 "write": true, 00:10:26.663 "unmap": true, 00:10:26.663 "flush": true, 00:10:26.663 "reset": true, 00:10:26.663 "nvme_admin": false, 00:10:26.663 "nvme_io": false, 00:10:26.663 "nvme_io_md": false, 00:10:26.663 "write_zeroes": true, 00:10:26.663 "zcopy": false, 00:10:26.663 "get_zone_info": false, 00:10:26.663 "zone_management": false, 00:10:26.663 "zone_append": false, 00:10:26.663 "compare": false, 00:10:26.663 "compare_and_write": false, 00:10:26.663 "abort": false, 00:10:26.663 "seek_hole": false, 00:10:26.663 "seek_data": false, 00:10:26.663 "copy": false, 00:10:26.663 "nvme_iov_md": false 00:10:26.663 }, 00:10:26.663 "memory_domains": [ 00:10:26.663 { 00:10:26.663 "dma_device_id": "system", 00:10:26.663 "dma_device_type": 1 00:10:26.663 }, 00:10:26.663 { 00:10:26.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.663 "dma_device_type": 2 00:10:26.663 }, 00:10:26.663 { 00:10:26.663 "dma_device_id": "system", 00:10:26.663 "dma_device_type": 1 00:10:26.663 }, 00:10:26.663 { 00:10:26.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.663 "dma_device_type": 2 00:10:26.663 } 00:10:26.663 ], 00:10:26.663 "driver_specific": { 00:10:26.663 "raid": { 00:10:26.663 "uuid": "7fa11f45-da74-4b8e-810f-9a0a96010ed5", 00:10:26.663 "strip_size_kb": 64, 00:10:26.663 "state": "online", 00:10:26.663 "raid_level": "raid0", 00:10:26.663 "superblock": true, 00:10:26.663 "num_base_bdevs": 2, 00:10:26.663 "num_base_bdevs_discovered": 2, 00:10:26.663 "num_base_bdevs_operational": 2, 00:10:26.663 "base_bdevs_list": [ 00:10:26.663 { 00:10:26.663 "name": "pt1", 00:10:26.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.663 "is_configured": true, 00:10:26.663 "data_offset": 2048, 00:10:26.663 "data_size": 63488 00:10:26.663 }, 00:10:26.663 { 00:10:26.663 "name": 
"pt2", 00:10:26.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.663 "is_configured": true, 00:10:26.663 "data_offset": 2048, 00:10:26.663 "data_size": 63488 00:10:26.663 } 00:10:26.663 ] 00:10:26.663 } 00:10:26.663 } 00:10:26.663 }' 00:10:26.663 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.922 pt2' 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.922 20:23:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.922 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.923 [2024-11-26 20:23:20.396203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7fa11f45-da74-4b8e-810f-9a0a96010ed5 '!=' 7fa11f45-da74-4b8e-810f-9a0a96010ed5 ']' 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61427 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61427 ']' 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61427 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61427 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61427' 00:10:26.923 killing process with pid 61427 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61427 00:10:26.923 [2024-11-26 20:23:20.460298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.923 [2024-11-26 20:23:20.460479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.923 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61427 00:10:26.923 [2024-11-26 20:23:20.460581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.923 [2024-11-26 20:23:20.460599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:27.182 [2024-11-26 20:23:20.716498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.557 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:28.557 00:10:28.557 real 0m5.072s 00:10:28.557 user 0m7.097s 00:10:28.557 sys 0m0.781s 00:10:28.557 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.557 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:28.557 ************************************ 00:10:28.557 END TEST raid_superblock_test 00:10:28.557 ************************************ 00:10:28.908 20:23:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:28.908 20:23:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.908 20:23:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.908 20:23:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.908 ************************************ 00:10:28.908 START TEST raid_read_error_test 00:10:28.908 ************************************ 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LnwKXK9iBG 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61644 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61644 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61644 ']' 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.908 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.908 [2024-11-26 20:23:22.269731] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:28.908 [2024-11-26 20:23:22.269969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61644 ] 00:10:28.908 [2024-11-26 20:23:22.446652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.167 [2024-11-26 20:23:22.584576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.425 [2024-11-26 20:23:22.827141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.425 [2024-11-26 20:23:22.827216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.684 BaseBdev1_malloc 
00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.684 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 true 00:10:29.942 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.942 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 [2024-11-26 20:23:23.243439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.943 [2024-11-26 20:23:23.243519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.943 [2024-11-26 20:23:23.243554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.943 [2024-11-26 20:23:23.243572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.943 [2024-11-26 20:23:23.246307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.943 [2024-11-26 20:23:23.246357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.943 BaseBdev1 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 BaseBdev2_malloc 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 true 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 [2024-11-26 20:23:23.318097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.943 [2024-11-26 20:23:23.318181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.943 [2024-11-26 20:23:23.318212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.943 [2024-11-26 20:23:23.318229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.943 [2024-11-26 20:23:23.320907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.943 [2024-11-26 20:23:23.321032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.943 BaseBdev2 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 [2024-11-26 20:23:23.330153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.943 [2024-11-26 20:23:23.332384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.943 [2024-11-26 20:23:23.332645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:29.943 [2024-11-26 20:23:23.332666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:29.943 [2024-11-26 20:23:23.332973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:29.943 [2024-11-26 20:23:23.333178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:29.943 [2024-11-26 20:23:23.333195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:29.943 [2024-11-26 20:23:23.333445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.943 "name": "raid_bdev1", 00:10:29.943 "uuid": "901096ea-87fc-4955-b543-b8fa8e7675c5", 00:10:29.943 "strip_size_kb": 64, 00:10:29.943 "state": "online", 00:10:29.943 "raid_level": "raid0", 00:10:29.943 "superblock": true, 00:10:29.943 "num_base_bdevs": 2, 00:10:29.943 "num_base_bdevs_discovered": 2, 00:10:29.943 "num_base_bdevs_operational": 2, 00:10:29.943 "base_bdevs_list": [ 00:10:29.943 { 00:10:29.943 "name": "BaseBdev1", 00:10:29.943 "uuid": "328750b7-e8d1-58e0-8868-c35e2251d664", 00:10:29.943 "is_configured": true, 00:10:29.943 "data_offset": 2048, 00:10:29.943 "data_size": 63488 00:10:29.943 }, 00:10:29.943 { 00:10:29.943 "name": "BaseBdev2", 00:10:29.943 "uuid": 
"4e289a99-2f73-557f-9a01-845188410933", 00:10:29.943 "is_configured": true, 00:10:29.943 "data_offset": 2048, 00:10:29.943 "data_size": 63488 00:10:29.943 } 00:10:29.943 ] 00:10:29.943 }' 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.943 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.509 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.509 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.509 [2024-11-26 20:23:23.934901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.446 "name": "raid_bdev1", 00:10:31.446 "uuid": "901096ea-87fc-4955-b543-b8fa8e7675c5", 00:10:31.446 "strip_size_kb": 64, 00:10:31.446 "state": "online", 00:10:31.446 "raid_level": "raid0", 00:10:31.446 "superblock": true, 00:10:31.446 "num_base_bdevs": 2, 00:10:31.446 "num_base_bdevs_discovered": 2, 00:10:31.446 "num_base_bdevs_operational": 2, 00:10:31.446 "base_bdevs_list": [ 00:10:31.446 { 00:10:31.446 "name": "BaseBdev1", 00:10:31.446 "uuid": "328750b7-e8d1-58e0-8868-c35e2251d664", 00:10:31.446 "is_configured": true, 00:10:31.446 "data_offset": 2048, 00:10:31.446 "data_size": 63488 00:10:31.446 }, 00:10:31.446 { 00:10:31.446 "name": "BaseBdev2", 00:10:31.446 "uuid": 
"4e289a99-2f73-557f-9a01-845188410933", 00:10:31.446 "is_configured": true, 00:10:31.446 "data_offset": 2048, 00:10:31.446 "data_size": 63488 00:10:31.446 } 00:10:31.446 ] 00:10:31.446 }' 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.446 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.015 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.015 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.015 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.015 [2024-11-26 20:23:25.291928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.015 [2024-11-26 20:23:25.291980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.015 [2024-11-26 20:23:25.295326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.015 [2024-11-26 20:23:25.295377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.015 [2024-11-26 20:23:25.295412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.015 [2024-11-26 20:23:25.295425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:32.015 { 00:10:32.015 "results": [ 00:10:32.015 { 00:10:32.015 "job": "raid_bdev1", 00:10:32.015 "core_mask": "0x1", 00:10:32.015 "workload": "randrw", 00:10:32.015 "percentage": 50, 00:10:32.015 "status": "finished", 00:10:32.015 "queue_depth": 1, 00:10:32.016 "io_size": 131072, 00:10:32.016 "runtime": 1.357388, 00:10:32.016 "iops": 13044.170126743422, 00:10:32.016 "mibps": 1630.5212658429277, 00:10:32.016 "io_failed": 1, 00:10:32.016 "io_timeout": 0, 00:10:32.016 "avg_latency_us": 
106.2755854825627, 00:10:32.016 "min_latency_us": 30.406986899563318, 00:10:32.016 "max_latency_us": 1845.8829694323144 00:10:32.016 } 00:10:32.016 ], 00:10:32.016 "core_count": 1 00:10:32.016 } 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61644 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61644 ']' 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61644 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61644 00:10:32.016 killing process with pid 61644 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61644' 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61644 00:10:32.016 [2024-11-26 20:23:25.337909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.016 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61644 00:10:32.016 [2024-11-26 20:23:25.500054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LnwKXK9iBG 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.414 
20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:33.414 ************************************ 00:10:33.414 END TEST raid_read_error_test 00:10:33.414 ************************************ 00:10:33.414 00:10:33.414 real 0m4.794s 00:10:33.414 user 0m5.820s 00:10:33.414 sys 0m0.532s 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.414 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.673 20:23:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:33.673 20:23:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.673 20:23:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.673 20:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.673 ************************************ 00:10:33.673 START TEST raid_write_error_test 00:10:33.673 ************************************ 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.673 20:23:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YRZdIjg99h 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61790 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61790 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61790 ']' 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.673 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.673 [2024-11-26 20:23:27.127585] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:10:33.673 [2024-11-26 20:23:27.127727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61790 ] 00:10:33.932 [2024-11-26 20:23:27.291312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.932 [2024-11-26 20:23:27.427980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.191 [2024-11-26 20:23:27.662449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.191 [2024-11-26 20:23:27.662489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.758 BaseBdev1_malloc 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.758 true 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.758 [2024-11-26 20:23:28.157369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.758 [2024-11-26 20:23:28.157544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.758 [2024-11-26 20:23:28.157580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.758 [2024-11-26 20:23:28.157596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.758 [2024-11-26 20:23:28.160377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.758 [2024-11-26 20:23:28.160424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.758 BaseBdev1 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.758 BaseBdev2_malloc 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.758 20:23:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.758 true 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.758 [2024-11-26 20:23:28.227000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.758 [2024-11-26 20:23:28.227198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.758 [2024-11-26 20:23:28.227231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:34.758 [2024-11-26 20:23:28.227267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.758 [2024-11-26 20:23:28.229948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.758 [2024-11-26 20:23:28.230008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.758 BaseBdev2 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.758 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.759 [2024-11-26 20:23:28.239119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:34.759 [2024-11-26 20:23:28.241412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.759 [2024-11-26 20:23:28.241779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:34.759 [2024-11-26 20:23:28.241810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:34.759 [2024-11-26 20:23:28.242183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:34.759 [2024-11-26 20:23:28.242429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.759 [2024-11-26 20:23:28.242447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:34.759 [2024-11-26 20:23:28.242670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.759 "name": "raid_bdev1", 00:10:34.759 "uuid": "53640292-35ac-4ff7-a731-815ca7cdd40e", 00:10:34.759 "strip_size_kb": 64, 00:10:34.759 "state": "online", 00:10:34.759 "raid_level": "raid0", 00:10:34.759 "superblock": true, 00:10:34.759 "num_base_bdevs": 2, 00:10:34.759 "num_base_bdevs_discovered": 2, 00:10:34.759 "num_base_bdevs_operational": 2, 00:10:34.759 "base_bdevs_list": [ 00:10:34.759 { 00:10:34.759 "name": "BaseBdev1", 00:10:34.759 "uuid": "e5592070-851b-5925-8816-3d36427c9bc3", 00:10:34.759 "is_configured": true, 00:10:34.759 "data_offset": 2048, 00:10:34.759 "data_size": 63488 00:10:34.759 }, 00:10:34.759 { 00:10:34.759 "name": "BaseBdev2", 00:10:34.759 "uuid": "94b2889a-a8e4-5e68-9b5a-8e7be28de55c", 00:10:34.759 "is_configured": true, 00:10:34.759 "data_offset": 2048, 00:10:34.759 "data_size": 63488 00:10:34.759 } 00:10:34.759 ] 00:10:34.759 }' 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.759 20:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.327 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.327 20:23:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.327 [2024-11-26 20:23:28.827652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.269 20:23:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.269 "name": "raid_bdev1", 00:10:36.269 "uuid": "53640292-35ac-4ff7-a731-815ca7cdd40e", 00:10:36.269 "strip_size_kb": 64, 00:10:36.269 "state": "online", 00:10:36.269 "raid_level": "raid0", 00:10:36.269 "superblock": true, 00:10:36.269 "num_base_bdevs": 2, 00:10:36.269 "num_base_bdevs_discovered": 2, 00:10:36.269 "num_base_bdevs_operational": 2, 00:10:36.269 "base_bdevs_list": [ 00:10:36.269 { 00:10:36.269 "name": "BaseBdev1", 00:10:36.269 "uuid": "e5592070-851b-5925-8816-3d36427c9bc3", 00:10:36.269 "is_configured": true, 00:10:36.269 "data_offset": 2048, 00:10:36.269 "data_size": 63488 00:10:36.269 }, 00:10:36.269 { 00:10:36.269 "name": "BaseBdev2", 00:10:36.269 "uuid": "94b2889a-a8e4-5e68-9b5a-8e7be28de55c", 00:10:36.269 "is_configured": true, 00:10:36.269 "data_offset": 2048, 00:10:36.269 "data_size": 63488 00:10:36.269 } 00:10:36.269 ] 00:10:36.269 }' 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.269 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.838 [2024-11-26 20:23:30.148349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.838 [2024-11-26 20:23:30.148392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.838 [2024-11-26 20:23:30.151859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.838 [2024-11-26 20:23:30.151954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.838 [2024-11-26 20:23:30.152023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.838 [2024-11-26 20:23:30.152079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:36.838 { 00:10:36.838 "results": [ 00:10:36.838 { 00:10:36.838 "job": "raid_bdev1", 00:10:36.838 "core_mask": "0x1", 00:10:36.838 "workload": "randrw", 00:10:36.838 "percentage": 50, 00:10:36.838 "status": "finished", 00:10:36.838 "queue_depth": 1, 00:10:36.838 "io_size": 131072, 00:10:36.838 "runtime": 1.32104, 00:10:36.838 "iops": 13066.2205535033, 00:10:36.838 "mibps": 1633.2775691879126, 00:10:36.838 "io_failed": 1, 00:10:36.838 "io_timeout": 0, 00:10:36.838 "avg_latency_us": 105.9212743340624, 00:10:36.838 "min_latency_us": 33.53711790393013, 00:10:36.838 "max_latency_us": 1810.1100436681222 00:10:36.838 } 00:10:36.838 ], 00:10:36.838 "core_count": 1 00:10:36.838 } 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61790 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61790 ']' 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61790 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61790 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61790' 00:10:36.838 killing process with pid 61790 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61790 00:10:36.838 [2024-11-26 20:23:30.199832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.838 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61790 00:10:36.838 [2024-11-26 20:23:30.367643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YRZdIjg99h 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:38.743 00:10:38.743 real 0m4.802s 00:10:38.743 user 0m5.783s 00:10:38.743 sys 0m0.586s 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.743 ************************************ 00:10:38.743 END TEST raid_write_error_test 00:10:38.743 ************************************ 00:10:38.743 20:23:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.743 20:23:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:38.743 20:23:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:38.743 20:23:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.743 20:23:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.743 20:23:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.743 ************************************ 00:10:38.743 START TEST raid_state_function_test 00:10:38.743 ************************************ 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61939 00:10:38.743 20:23:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61939' 00:10:38.743 Process raid pid: 61939 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61939 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61939 ']' 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.743 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.744 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.744 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.744 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.744 [2024-11-26 20:23:31.991171] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:10:38.744 [2024-11-26 20:23:31.991418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.744 [2024-11-26 20:23:32.172870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.003 [2024-11-26 20:23:32.311499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.263 [2024-11-26 20:23:32.557982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.263 [2024-11-26 20:23:32.558135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.522 [2024-11-26 20:23:32.935467] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.522 [2024-11-26 20:23:32.935534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.522 [2024-11-26 20:23:32.935548] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.522 [2024-11-26 20:23:32.935560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.522 20:23:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.522 "name": "Existed_Raid", 00:10:39.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.522 "strip_size_kb": 64, 00:10:39.522 "state": "configuring", 00:10:39.522 
"raid_level": "concat", 00:10:39.522 "superblock": false, 00:10:39.522 "num_base_bdevs": 2, 00:10:39.522 "num_base_bdevs_discovered": 0, 00:10:39.522 "num_base_bdevs_operational": 2, 00:10:39.522 "base_bdevs_list": [ 00:10:39.522 { 00:10:39.522 "name": "BaseBdev1", 00:10:39.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.522 "is_configured": false, 00:10:39.522 "data_offset": 0, 00:10:39.522 "data_size": 0 00:10:39.522 }, 00:10:39.522 { 00:10:39.522 "name": "BaseBdev2", 00:10:39.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.522 "is_configured": false, 00:10:39.522 "data_offset": 0, 00:10:39.522 "data_size": 0 00:10:39.522 } 00:10:39.522 ] 00:10:39.522 }' 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.522 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.781 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.781 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.781 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.781 [2024-11-26 20:23:33.326944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.781 [2024-11-26 20:23:33.327042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:39.781 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.781 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:39.781 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.781 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:40.040 [2024-11-26 20:23:33.334927] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.040 [2024-11-26 20:23:33.335023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.040 [2024-11-26 20:23:33.335064] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.040 [2024-11-26 20:23:33.335095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.040 [2024-11-26 20:23:33.385260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.040 BaseBdev1 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.040 [ 00:10:40.040 { 00:10:40.040 "name": "BaseBdev1", 00:10:40.040 "aliases": [ 00:10:40.040 "8f668d9d-3087-45a9-b724-068f71e698ef" 00:10:40.040 ], 00:10:40.040 "product_name": "Malloc disk", 00:10:40.040 "block_size": 512, 00:10:40.040 "num_blocks": 65536, 00:10:40.040 "uuid": "8f668d9d-3087-45a9-b724-068f71e698ef", 00:10:40.040 "assigned_rate_limits": { 00:10:40.040 "rw_ios_per_sec": 0, 00:10:40.040 "rw_mbytes_per_sec": 0, 00:10:40.040 "r_mbytes_per_sec": 0, 00:10:40.040 "w_mbytes_per_sec": 0 00:10:40.040 }, 00:10:40.040 "claimed": true, 00:10:40.040 "claim_type": "exclusive_write", 00:10:40.040 "zoned": false, 00:10:40.040 "supported_io_types": { 00:10:40.040 "read": true, 00:10:40.040 "write": true, 00:10:40.040 "unmap": true, 00:10:40.040 "flush": true, 00:10:40.040 "reset": true, 00:10:40.040 "nvme_admin": false, 00:10:40.040 "nvme_io": false, 00:10:40.040 "nvme_io_md": false, 00:10:40.040 "write_zeroes": true, 00:10:40.040 "zcopy": true, 00:10:40.040 "get_zone_info": false, 00:10:40.040 "zone_management": false, 00:10:40.040 "zone_append": false, 00:10:40.040 "compare": false, 00:10:40.040 "compare_and_write": false, 00:10:40.040 "abort": true, 00:10:40.040 "seek_hole": false, 00:10:40.040 "seek_data": false, 00:10:40.040 "copy": true, 00:10:40.040 "nvme_iov_md": 
false 00:10:40.040 }, 00:10:40.040 "memory_domains": [ 00:10:40.040 { 00:10:40.040 "dma_device_id": "system", 00:10:40.040 "dma_device_type": 1 00:10:40.040 }, 00:10:40.040 { 00:10:40.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.040 "dma_device_type": 2 00:10:40.040 } 00:10:40.040 ], 00:10:40.040 "driver_specific": {} 00:10:40.040 } 00:10:40.040 ] 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.040 20:23:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.040 "name": "Existed_Raid", 00:10:40.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.040 "strip_size_kb": 64, 00:10:40.040 "state": "configuring", 00:10:40.040 "raid_level": "concat", 00:10:40.040 "superblock": false, 00:10:40.040 "num_base_bdevs": 2, 00:10:40.040 "num_base_bdevs_discovered": 1, 00:10:40.040 "num_base_bdevs_operational": 2, 00:10:40.040 "base_bdevs_list": [ 00:10:40.040 { 00:10:40.040 "name": "BaseBdev1", 00:10:40.040 "uuid": "8f668d9d-3087-45a9-b724-068f71e698ef", 00:10:40.040 "is_configured": true, 00:10:40.040 "data_offset": 0, 00:10:40.040 "data_size": 65536 00:10:40.040 }, 00:10:40.040 { 00:10:40.040 "name": "BaseBdev2", 00:10:40.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.040 "is_configured": false, 00:10:40.040 "data_offset": 0, 00:10:40.040 "data_size": 0 00:10:40.040 } 00:10:40.040 ] 00:10:40.040 }' 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.040 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.299 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.299 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.299 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.299 [2024-11-26 20:23:33.844556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.299 [2024-11-26 20:23:33.844638] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:40.299 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.299 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:40.299 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.299 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 [2024-11-26 20:23:33.856681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.558 [2024-11-26 20:23:33.858922] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.558 [2024-11-26 20:23:33.858983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.558 "name": "Existed_Raid", 00:10:40.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.558 "strip_size_kb": 64, 00:10:40.558 "state": "configuring", 00:10:40.558 "raid_level": "concat", 00:10:40.558 "superblock": false, 00:10:40.558 "num_base_bdevs": 2, 00:10:40.558 "num_base_bdevs_discovered": 1, 00:10:40.558 "num_base_bdevs_operational": 2, 00:10:40.558 "base_bdevs_list": [ 00:10:40.558 { 00:10:40.558 "name": "BaseBdev1", 00:10:40.558 "uuid": "8f668d9d-3087-45a9-b724-068f71e698ef", 00:10:40.558 "is_configured": true, 00:10:40.558 "data_offset": 0, 00:10:40.558 "data_size": 65536 00:10:40.558 }, 00:10:40.558 { 00:10:40.558 "name": "BaseBdev2", 00:10:40.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.558 "is_configured": false, 00:10:40.558 "data_offset": 0, 00:10:40.558 "data_size": 0 
00:10:40.558 } 00:10:40.558 ] 00:10:40.558 }' 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.558 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.817 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.817 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.817 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.075 [2024-11-26 20:23:34.413088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.075 [2024-11-26 20:23:34.413273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:41.075 [2024-11-26 20:23:34.413307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:41.075 [2024-11-26 20:23:34.413668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:41.075 [2024-11-26 20:23:34.413932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:41.075 [2024-11-26 20:23:34.413988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:41.075 [2024-11-26 20:23:34.414369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.075 BaseBdev2 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.075 20:23:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.075 [ 00:10:41.075 { 00:10:41.075 "name": "BaseBdev2", 00:10:41.075 "aliases": [ 00:10:41.075 "627d92cb-604a-4ae4-9aaf-6b1f40d24b43" 00:10:41.075 ], 00:10:41.075 "product_name": "Malloc disk", 00:10:41.075 "block_size": 512, 00:10:41.075 "num_blocks": 65536, 00:10:41.075 "uuid": "627d92cb-604a-4ae4-9aaf-6b1f40d24b43", 00:10:41.075 "assigned_rate_limits": { 00:10:41.075 "rw_ios_per_sec": 0, 00:10:41.075 "rw_mbytes_per_sec": 0, 00:10:41.075 "r_mbytes_per_sec": 0, 00:10:41.075 "w_mbytes_per_sec": 0 00:10:41.075 }, 00:10:41.075 "claimed": true, 00:10:41.075 "claim_type": "exclusive_write", 00:10:41.075 "zoned": false, 00:10:41.075 "supported_io_types": { 00:10:41.075 "read": true, 00:10:41.075 "write": true, 00:10:41.075 "unmap": true, 00:10:41.075 "flush": true, 00:10:41.075 "reset": true, 00:10:41.075 "nvme_admin": false, 00:10:41.075 "nvme_io": false, 00:10:41.075 "nvme_io_md": 
false, 00:10:41.075 "write_zeroes": true, 00:10:41.075 "zcopy": true, 00:10:41.075 "get_zone_info": false, 00:10:41.075 "zone_management": false, 00:10:41.075 "zone_append": false, 00:10:41.075 "compare": false, 00:10:41.075 "compare_and_write": false, 00:10:41.075 "abort": true, 00:10:41.075 "seek_hole": false, 00:10:41.075 "seek_data": false, 00:10:41.075 "copy": true, 00:10:41.075 "nvme_iov_md": false 00:10:41.075 }, 00:10:41.075 "memory_domains": [ 00:10:41.075 { 00:10:41.075 "dma_device_id": "system", 00:10:41.075 "dma_device_type": 1 00:10:41.075 }, 00:10:41.075 { 00:10:41.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.075 "dma_device_type": 2 00:10:41.075 } 00:10:41.075 ], 00:10:41.075 "driver_specific": {} 00:10:41.075 } 00:10:41.075 ] 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.075 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.076 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.076 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.076 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.076 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.076 "name": "Existed_Raid", 00:10:41.076 "uuid": "5e2903b3-f2ee-4396-b551-3184f7b85b4f", 00:10:41.076 "strip_size_kb": 64, 00:10:41.076 "state": "online", 00:10:41.076 "raid_level": "concat", 00:10:41.076 "superblock": false, 00:10:41.076 "num_base_bdevs": 2, 00:10:41.076 "num_base_bdevs_discovered": 2, 00:10:41.076 "num_base_bdevs_operational": 2, 00:10:41.076 "base_bdevs_list": [ 00:10:41.076 { 00:10:41.076 "name": "BaseBdev1", 00:10:41.076 "uuid": "8f668d9d-3087-45a9-b724-068f71e698ef", 00:10:41.076 "is_configured": true, 00:10:41.076 "data_offset": 0, 00:10:41.076 "data_size": 65536 00:10:41.076 }, 00:10:41.076 { 00:10:41.076 "name": "BaseBdev2", 00:10:41.076 "uuid": "627d92cb-604a-4ae4-9aaf-6b1f40d24b43", 00:10:41.076 "is_configured": true, 00:10:41.076 "data_offset": 0, 00:10:41.076 "data_size": 65536 00:10:41.076 } 00:10:41.076 ] 00:10:41.076 }' 00:10:41.076 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:41.076 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.645 [2024-11-26 20:23:34.960701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.645 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.645 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.645 "name": "Existed_Raid", 00:10:41.645 "aliases": [ 00:10:41.645 "5e2903b3-f2ee-4396-b551-3184f7b85b4f" 00:10:41.645 ], 00:10:41.645 "product_name": "Raid Volume", 00:10:41.645 "block_size": 512, 00:10:41.645 "num_blocks": 131072, 00:10:41.645 "uuid": "5e2903b3-f2ee-4396-b551-3184f7b85b4f", 00:10:41.645 "assigned_rate_limits": { 00:10:41.645 "rw_ios_per_sec": 0, 00:10:41.645 "rw_mbytes_per_sec": 0, 00:10:41.645 "r_mbytes_per_sec": 
0, 00:10:41.645 "w_mbytes_per_sec": 0 00:10:41.645 }, 00:10:41.645 "claimed": false, 00:10:41.645 "zoned": false, 00:10:41.645 "supported_io_types": { 00:10:41.645 "read": true, 00:10:41.645 "write": true, 00:10:41.645 "unmap": true, 00:10:41.645 "flush": true, 00:10:41.645 "reset": true, 00:10:41.645 "nvme_admin": false, 00:10:41.645 "nvme_io": false, 00:10:41.645 "nvme_io_md": false, 00:10:41.645 "write_zeroes": true, 00:10:41.645 "zcopy": false, 00:10:41.645 "get_zone_info": false, 00:10:41.645 "zone_management": false, 00:10:41.645 "zone_append": false, 00:10:41.645 "compare": false, 00:10:41.645 "compare_and_write": false, 00:10:41.645 "abort": false, 00:10:41.645 "seek_hole": false, 00:10:41.645 "seek_data": false, 00:10:41.645 "copy": false, 00:10:41.645 "nvme_iov_md": false 00:10:41.645 }, 00:10:41.645 "memory_domains": [ 00:10:41.645 { 00:10:41.645 "dma_device_id": "system", 00:10:41.645 "dma_device_type": 1 00:10:41.645 }, 00:10:41.645 { 00:10:41.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.645 "dma_device_type": 2 00:10:41.645 }, 00:10:41.645 { 00:10:41.645 "dma_device_id": "system", 00:10:41.645 "dma_device_type": 1 00:10:41.645 }, 00:10:41.645 { 00:10:41.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.645 "dma_device_type": 2 00:10:41.645 } 00:10:41.645 ], 00:10:41.645 "driver_specific": { 00:10:41.645 "raid": { 00:10:41.645 "uuid": "5e2903b3-f2ee-4396-b551-3184f7b85b4f", 00:10:41.645 "strip_size_kb": 64, 00:10:41.645 "state": "online", 00:10:41.645 "raid_level": "concat", 00:10:41.645 "superblock": false, 00:10:41.645 "num_base_bdevs": 2, 00:10:41.645 "num_base_bdevs_discovered": 2, 00:10:41.645 "num_base_bdevs_operational": 2, 00:10:41.645 "base_bdevs_list": [ 00:10:41.645 { 00:10:41.645 "name": "BaseBdev1", 00:10:41.645 "uuid": "8f668d9d-3087-45a9-b724-068f71e698ef", 00:10:41.645 "is_configured": true, 00:10:41.645 "data_offset": 0, 00:10:41.645 "data_size": 65536 00:10:41.645 }, 00:10:41.645 { 00:10:41.645 "name": "BaseBdev2", 
00:10:41.645 "uuid": "627d92cb-604a-4ae4-9aaf-6b1f40d24b43", 00:10:41.645 "is_configured": true, 00:10:41.645 "data_offset": 0, 00:10:41.645 "data_size": 65536 00:10:41.645 } 00:10:41.645 ] 00:10:41.645 } 00:10:41.645 } 00:10:41.645 }' 00:10:41.645 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.645 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:41.645 BaseBdev2' 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.646 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.905 [2024-11-26 20:23:35.203974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.905 [2024-11-26 20:23:35.204080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.905 [2024-11-26 20:23:35.204153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.905 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.905 "name": "Existed_Raid", 00:10:41.905 "uuid": "5e2903b3-f2ee-4396-b551-3184f7b85b4f", 00:10:41.905 "strip_size_kb": 64, 00:10:41.905 
"state": "offline", 00:10:41.905 "raid_level": "concat", 00:10:41.905 "superblock": false, 00:10:41.905 "num_base_bdevs": 2, 00:10:41.905 "num_base_bdevs_discovered": 1, 00:10:41.905 "num_base_bdevs_operational": 1, 00:10:41.905 "base_bdevs_list": [ 00:10:41.905 { 00:10:41.905 "name": null, 00:10:41.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.905 "is_configured": false, 00:10:41.905 "data_offset": 0, 00:10:41.905 "data_size": 65536 00:10:41.905 }, 00:10:41.906 { 00:10:41.906 "name": "BaseBdev2", 00:10:41.906 "uuid": "627d92cb-604a-4ae4-9aaf-6b1f40d24b43", 00:10:41.906 "is_configured": true, 00:10:41.906 "data_offset": 0, 00:10:41.906 "data_size": 65536 00:10:41.906 } 00:10:41.906 ] 00:10:41.906 }' 00:10:41.906 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.906 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.472 [2024-11-26 20:23:35.836501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.472 [2024-11-26 20:23:35.836626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.472 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61939 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61939 ']' 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61939 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.472 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61939 00:10:42.731 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.731 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.731 killing process with pid 61939 00:10:42.731 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61939' 00:10:42.731 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61939 00:10:42.731 [2024-11-26 20:23:36.050812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.731 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61939 00:10:42.731 [2024-11-26 20:23:36.071156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:44.107 00:10:44.107 real 0m5.559s 00:10:44.107 user 0m7.971s 00:10:44.107 sys 0m0.855s 00:10:44.107 ************************************ 00:10:44.107 END TEST raid_state_function_test 00:10:44.107 ************************************ 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.107 20:23:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:44.107 20:23:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:44.107 20:23:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.107 20:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.107 ************************************ 00:10:44.107 START TEST raid_state_function_test_sb 00:10:44.107 ************************************ 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:44.107 Process raid pid: 62192 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62192 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62192' 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62192 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62192 ']' 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.107 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.107 20:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.107 [2024-11-26 20:23:37.627055] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:44.107 [2024-11-26 20:23:37.627292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.366 [2024-11-26 20:23:37.810393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.624 [2024-11-26 20:23:37.947529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.883 [2024-11-26 20:23:38.199332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.884 [2024-11-26 20:23:38.199447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.143 [2024-11-26 20:23:38.542570] bdev.c:8475:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:10:45.143 [2024-11-26 20:23:38.542692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.143 [2024-11-26 20:23:38.542739] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.143 [2024-11-26 20:23:38.542769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.143 "name": "Existed_Raid", 00:10:45.143 "uuid": "c11355f7-5723-40a4-b0aa-8e40e447172a", 00:10:45.143 "strip_size_kb": 64, 00:10:45.143 "state": "configuring", 00:10:45.143 "raid_level": "concat", 00:10:45.143 "superblock": true, 00:10:45.143 "num_base_bdevs": 2, 00:10:45.143 "num_base_bdevs_discovered": 0, 00:10:45.143 "num_base_bdevs_operational": 2, 00:10:45.143 "base_bdevs_list": [ 00:10:45.143 { 00:10:45.143 "name": "BaseBdev1", 00:10:45.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.143 "is_configured": false, 00:10:45.143 "data_offset": 0, 00:10:45.143 "data_size": 0 00:10:45.143 }, 00:10:45.143 { 00:10:45.143 "name": "BaseBdev2", 00:10:45.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.143 "is_configured": false, 00:10:45.143 "data_offset": 0, 00:10:45.143 "data_size": 0 00:10:45.143 } 00:10:45.143 ] 00:10:45.143 }' 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.143 20:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.712 [2024-11-26 20:23:39.017738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:10:45.712 [2024-11-26 20:23:39.017840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.712 [2024-11-26 20:23:39.029729] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.712 [2024-11-26 20:23:39.029781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.712 [2024-11-26 20:23:39.029793] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.712 [2024-11-26 20:23:39.029807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.712 [2024-11-26 20:23:39.086819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.712 BaseBdev1 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.712 [ 00:10:45.712 { 00:10:45.712 "name": "BaseBdev1", 00:10:45.712 "aliases": [ 00:10:45.712 "5d96a29a-a265-4eca-8e1c-e8910e79079b" 00:10:45.712 ], 00:10:45.712 "product_name": "Malloc disk", 00:10:45.712 "block_size": 512, 00:10:45.712 "num_blocks": 65536, 00:10:45.712 "uuid": "5d96a29a-a265-4eca-8e1c-e8910e79079b", 00:10:45.712 "assigned_rate_limits": { 00:10:45.712 "rw_ios_per_sec": 0, 00:10:45.712 "rw_mbytes_per_sec": 0, 00:10:45.712 "r_mbytes_per_sec": 0, 00:10:45.712 "w_mbytes_per_sec": 0 00:10:45.712 }, 00:10:45.712 "claimed": true, 
00:10:45.712 "claim_type": "exclusive_write", 00:10:45.712 "zoned": false, 00:10:45.712 "supported_io_types": { 00:10:45.712 "read": true, 00:10:45.712 "write": true, 00:10:45.712 "unmap": true, 00:10:45.712 "flush": true, 00:10:45.712 "reset": true, 00:10:45.712 "nvme_admin": false, 00:10:45.712 "nvme_io": false, 00:10:45.712 "nvme_io_md": false, 00:10:45.712 "write_zeroes": true, 00:10:45.712 "zcopy": true, 00:10:45.712 "get_zone_info": false, 00:10:45.712 "zone_management": false, 00:10:45.712 "zone_append": false, 00:10:45.712 "compare": false, 00:10:45.712 "compare_and_write": false, 00:10:45.712 "abort": true, 00:10:45.712 "seek_hole": false, 00:10:45.712 "seek_data": false, 00:10:45.712 "copy": true, 00:10:45.712 "nvme_iov_md": false 00:10:45.712 }, 00:10:45.712 "memory_domains": [ 00:10:45.712 { 00:10:45.712 "dma_device_id": "system", 00:10:45.712 "dma_device_type": 1 00:10:45.712 }, 00:10:45.712 { 00:10:45.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.712 "dma_device_type": 2 00:10:45.712 } 00:10:45.712 ], 00:10:45.712 "driver_specific": {} 00:10:45.712 } 00:10:45.712 ] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.712 20:23:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.712 "name": "Existed_Raid", 00:10:45.712 "uuid": "1f7d8d35-4fe6-49de-91a4-1aaa1bc757f6", 00:10:45.712 "strip_size_kb": 64, 00:10:45.712 "state": "configuring", 00:10:45.712 "raid_level": "concat", 00:10:45.712 "superblock": true, 00:10:45.712 "num_base_bdevs": 2, 00:10:45.712 "num_base_bdevs_discovered": 1, 00:10:45.712 "num_base_bdevs_operational": 2, 00:10:45.712 "base_bdevs_list": [ 00:10:45.712 { 00:10:45.712 "name": "BaseBdev1", 00:10:45.712 "uuid": "5d96a29a-a265-4eca-8e1c-e8910e79079b", 00:10:45.712 "is_configured": true, 00:10:45.712 "data_offset": 2048, 00:10:45.712 "data_size": 63488 00:10:45.712 }, 00:10:45.712 { 00:10:45.712 "name": "BaseBdev2", 00:10:45.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.712 
"is_configured": false, 00:10:45.712 "data_offset": 0, 00:10:45.712 "data_size": 0 00:10:45.712 } 00:10:45.712 ] 00:10:45.712 }' 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.712 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 [2024-11-26 20:23:39.574061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.279 [2024-11-26 20:23:39.574190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 [2024-11-26 20:23:39.586093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.279 [2024-11-26 20:23:39.588209] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.279 [2024-11-26 20:23:39.588269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.279 20:23:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.279 20:23:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.279 "name": "Existed_Raid", 00:10:46.279 "uuid": "b9a3de6f-0e40-4c96-86f6-39ecc20c62b3", 00:10:46.279 "strip_size_kb": 64, 00:10:46.279 "state": "configuring", 00:10:46.279 "raid_level": "concat", 00:10:46.279 "superblock": true, 00:10:46.279 "num_base_bdevs": 2, 00:10:46.279 "num_base_bdevs_discovered": 1, 00:10:46.279 "num_base_bdevs_operational": 2, 00:10:46.279 "base_bdevs_list": [ 00:10:46.279 { 00:10:46.279 "name": "BaseBdev1", 00:10:46.279 "uuid": "5d96a29a-a265-4eca-8e1c-e8910e79079b", 00:10:46.279 "is_configured": true, 00:10:46.279 "data_offset": 2048, 00:10:46.279 "data_size": 63488 00:10:46.279 }, 00:10:46.279 { 00:10:46.279 "name": "BaseBdev2", 00:10:46.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.279 "is_configured": false, 00:10:46.279 "data_offset": 0, 00:10:46.279 "data_size": 0 00:10:46.279 } 00:10:46.279 ] 00:10:46.279 }' 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.279 20:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.538 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.538 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.538 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.798 [2024-11-26 20:23:40.123594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.798 [2024-11-26 20:23:40.123984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:46.798 [2024-11-26 20:23:40.124048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:46.798 [2024-11-26 20:23:40.124378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:10:46.798 BaseBdev2 00:10:46.798 [2024-11-26 20:23:40.124624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:46.798 [2024-11-26 20:23:40.124687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:46.798 [2024-11-26 20:23:40.124923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.798 20:23:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.798 [ 00:10:46.798 { 00:10:46.798 "name": "BaseBdev2", 00:10:46.798 "aliases": [ 00:10:46.798 "5eb4d55a-7bf2-419b-9b8a-aab79bde8117" 00:10:46.798 ], 00:10:46.798 "product_name": "Malloc disk", 00:10:46.798 "block_size": 512, 00:10:46.798 "num_blocks": 65536, 00:10:46.798 "uuid": "5eb4d55a-7bf2-419b-9b8a-aab79bde8117", 00:10:46.798 "assigned_rate_limits": { 00:10:46.798 "rw_ios_per_sec": 0, 00:10:46.798 "rw_mbytes_per_sec": 0, 00:10:46.798 "r_mbytes_per_sec": 0, 00:10:46.798 "w_mbytes_per_sec": 0 00:10:46.798 }, 00:10:46.798 "claimed": true, 00:10:46.798 "claim_type": "exclusive_write", 00:10:46.798 "zoned": false, 00:10:46.798 "supported_io_types": { 00:10:46.798 "read": true, 00:10:46.798 "write": true, 00:10:46.798 "unmap": true, 00:10:46.798 "flush": true, 00:10:46.798 "reset": true, 00:10:46.798 "nvme_admin": false, 00:10:46.798 "nvme_io": false, 00:10:46.798 "nvme_io_md": false, 00:10:46.798 "write_zeroes": true, 00:10:46.798 "zcopy": true, 00:10:46.798 "get_zone_info": false, 00:10:46.798 "zone_management": false, 00:10:46.798 "zone_append": false, 00:10:46.798 "compare": false, 00:10:46.798 "compare_and_write": false, 00:10:46.798 "abort": true, 00:10:46.798 "seek_hole": false, 00:10:46.798 "seek_data": false, 00:10:46.798 "copy": true, 00:10:46.798 "nvme_iov_md": false 00:10:46.798 }, 00:10:46.798 "memory_domains": [ 00:10:46.798 { 00:10:46.798 "dma_device_id": "system", 00:10:46.798 "dma_device_type": 1 00:10:46.798 }, 00:10:46.798 { 00:10:46.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.798 "dma_device_type": 2 00:10:46.798 } 00:10:46.798 ], 00:10:46.798 "driver_specific": {} 00:10:46.798 } 00:10:46.798 ] 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.798 20:23:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.798 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.798 20:23:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.798 "name": "Existed_Raid", 00:10:46.798 "uuid": "b9a3de6f-0e40-4c96-86f6-39ecc20c62b3", 00:10:46.798 "strip_size_kb": 64, 00:10:46.798 "state": "online", 00:10:46.798 "raid_level": "concat", 00:10:46.798 "superblock": true, 00:10:46.798 "num_base_bdevs": 2, 00:10:46.798 "num_base_bdevs_discovered": 2, 00:10:46.798 "num_base_bdevs_operational": 2, 00:10:46.798 "base_bdevs_list": [ 00:10:46.798 { 00:10:46.798 "name": "BaseBdev1", 00:10:46.798 "uuid": "5d96a29a-a265-4eca-8e1c-e8910e79079b", 00:10:46.798 "is_configured": true, 00:10:46.798 "data_offset": 2048, 00:10:46.798 "data_size": 63488 00:10:46.798 }, 00:10:46.798 { 00:10:46.798 "name": "BaseBdev2", 00:10:46.798 "uuid": "5eb4d55a-7bf2-419b-9b8a-aab79bde8117", 00:10:46.798 "is_configured": true, 00:10:46.798 "data_offset": 2048, 00:10:46.798 "data_size": 63488 00:10:46.798 } 00:10:46.798 ] 00:10:46.798 }' 00:10:46.799 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.799 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.368 [2024-11-26 20:23:40.675205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.368 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.368 "name": "Existed_Raid", 00:10:47.368 "aliases": [ 00:10:47.368 "b9a3de6f-0e40-4c96-86f6-39ecc20c62b3" 00:10:47.368 ], 00:10:47.368 "product_name": "Raid Volume", 00:10:47.368 "block_size": 512, 00:10:47.368 "num_blocks": 126976, 00:10:47.368 "uuid": "b9a3de6f-0e40-4c96-86f6-39ecc20c62b3", 00:10:47.368 "assigned_rate_limits": { 00:10:47.368 "rw_ios_per_sec": 0, 00:10:47.368 "rw_mbytes_per_sec": 0, 00:10:47.368 "r_mbytes_per_sec": 0, 00:10:47.368 "w_mbytes_per_sec": 0 00:10:47.369 }, 00:10:47.369 "claimed": false, 00:10:47.369 "zoned": false, 00:10:47.369 "supported_io_types": { 00:10:47.369 "read": true, 00:10:47.369 "write": true, 00:10:47.369 "unmap": true, 00:10:47.369 "flush": true, 00:10:47.369 "reset": true, 00:10:47.369 "nvme_admin": false, 00:10:47.369 "nvme_io": false, 00:10:47.369 "nvme_io_md": false, 00:10:47.369 "write_zeroes": true, 00:10:47.369 "zcopy": false, 00:10:47.369 "get_zone_info": false, 00:10:47.369 "zone_management": false, 00:10:47.369 "zone_append": false, 00:10:47.369 "compare": false, 00:10:47.369 "compare_and_write": false, 00:10:47.369 "abort": false, 00:10:47.369 "seek_hole": false, 00:10:47.369 "seek_data": false, 00:10:47.369 "copy": false, 00:10:47.369 "nvme_iov_md": false 00:10:47.369 }, 00:10:47.369 "memory_domains": [ 00:10:47.369 { 00:10:47.369 
"dma_device_id": "system", 00:10:47.369 "dma_device_type": 1 00:10:47.369 }, 00:10:47.369 { 00:10:47.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.369 "dma_device_type": 2 00:10:47.369 }, 00:10:47.369 { 00:10:47.369 "dma_device_id": "system", 00:10:47.369 "dma_device_type": 1 00:10:47.369 }, 00:10:47.369 { 00:10:47.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.369 "dma_device_type": 2 00:10:47.369 } 00:10:47.369 ], 00:10:47.369 "driver_specific": { 00:10:47.369 "raid": { 00:10:47.369 "uuid": "b9a3de6f-0e40-4c96-86f6-39ecc20c62b3", 00:10:47.369 "strip_size_kb": 64, 00:10:47.369 "state": "online", 00:10:47.369 "raid_level": "concat", 00:10:47.369 "superblock": true, 00:10:47.369 "num_base_bdevs": 2, 00:10:47.369 "num_base_bdevs_discovered": 2, 00:10:47.369 "num_base_bdevs_operational": 2, 00:10:47.369 "base_bdevs_list": [ 00:10:47.369 { 00:10:47.369 "name": "BaseBdev1", 00:10:47.369 "uuid": "5d96a29a-a265-4eca-8e1c-e8910e79079b", 00:10:47.369 "is_configured": true, 00:10:47.369 "data_offset": 2048, 00:10:47.369 "data_size": 63488 00:10:47.369 }, 00:10:47.369 { 00:10:47.369 "name": "BaseBdev2", 00:10:47.369 "uuid": "5eb4d55a-7bf2-419b-9b8a-aab79bde8117", 00:10:47.369 "is_configured": true, 00:10:47.369 "data_offset": 2048, 00:10:47.369 "data_size": 63488 00:10:47.369 } 00:10:47.369 ] 00:10:47.369 } 00:10:47.369 } 00:10:47.369 }' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:47.369 BaseBdev2' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.369 20:23:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.369 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.631 [2024-11-26 20:23:40.922560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.631 [2024-11-26 20:23:40.922604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.631 [2024-11-26 20:23:40.922664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.631 "name": "Existed_Raid", 00:10:47.631 "uuid": "b9a3de6f-0e40-4c96-86f6-39ecc20c62b3", 00:10:47.631 "strip_size_kb": 64, 00:10:47.631 "state": "offline", 00:10:47.631 "raid_level": "concat", 00:10:47.631 "superblock": true, 00:10:47.631 "num_base_bdevs": 2, 00:10:47.631 "num_base_bdevs_discovered": 1, 00:10:47.631 "num_base_bdevs_operational": 1, 00:10:47.631 "base_bdevs_list": [ 00:10:47.631 { 00:10:47.631 "name": null, 00:10:47.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.631 "is_configured": false, 00:10:47.631 "data_offset": 0, 00:10:47.631 "data_size": 63488 00:10:47.631 }, 00:10:47.631 { 00:10:47.631 "name": "BaseBdev2", 00:10:47.631 "uuid": "5eb4d55a-7bf2-419b-9b8a-aab79bde8117", 00:10:47.631 "is_configured": true, 00:10:47.631 "data_offset": 2048, 00:10:47.631 "data_size": 63488 00:10:47.631 } 00:10:47.631 ] 
00:10:47.631 }' 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.631 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.199 [2024-11-26 20:23:41.586886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.199 [2024-11-26 20:23:41.586951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.199 20:23:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.199 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62192 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62192 ']' 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62192 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62192 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62192' 00:10:48.457 killing process with pid 62192 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62192 00:10:48.457 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62192 00:10:48.457 [2024-11-26 20:23:41.805365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.457 [2024-11-26 20:23:41.826904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.834 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:49.834 00:10:49.834 real 0m5.675s 00:10:49.834 user 0m8.128s 00:10:49.834 sys 0m0.908s 00:10:49.834 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.834 ************************************ 00:10:49.834 END TEST raid_state_function_test_sb 00:10:49.834 ************************************ 00:10:49.834 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.834 20:23:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:49.834 20:23:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:49.834 20:23:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.834 20:23:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.834 ************************************ 00:10:49.834 START TEST raid_superblock_test 00:10:49.834 ************************************ 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62450 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62450 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62450 ']' 00:10:49.834 20:23:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.834 20:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.834 [2024-11-26 20:23:43.354120] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:49.834 [2024-11-26 20:23:43.354380] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62450 ] 00:10:50.092 [2024-11-26 20:23:43.517434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.350 [2024-11-26 20:23:43.654833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.351 [2024-11-26 20:23:43.901234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.351 [2024-11-26 20:23:43.901406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.956 
20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.956 malloc1 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.956 [2024-11-26 20:23:44.352063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.956 [2024-11-26 20:23:44.352187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.956 [2024-11-26 20:23:44.352260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:50.956 [2024-11-26 20:23:44.352306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:50.956 [2024-11-26 20:23:44.354809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.956 [2024-11-26 20:23:44.354899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.956 pt1 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.956 malloc2 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.956 [2024-11-26 20:23:44.418977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.956 [2024-11-26 20:23:44.419093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.956 [2024-11-26 20:23:44.419155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:50.956 [2024-11-26 20:23:44.419193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.956 [2024-11-26 20:23:44.421691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.956 [2024-11-26 20:23:44.421774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.956 pt2 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.956 [2024-11-26 20:23:44.431017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.956 [2024-11-26 20:23:44.433126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.956 [2024-11-26 20:23:44.433400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:50.956 [2024-11-26 20:23:44.433421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:10:50.956 [2024-11-26 20:23:44.433713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:50.956 [2024-11-26 20:23:44.433878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:50.956 [2024-11-26 20:23:44.433892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:50.956 [2024-11-26 20:23:44.434074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.956 20:23:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.956 "name": "raid_bdev1", 00:10:50.956 "uuid": "99a417da-a522-4f39-a4f0-a98ed9dd31ac", 00:10:50.956 "strip_size_kb": 64, 00:10:50.956 "state": "online", 00:10:50.956 "raid_level": "concat", 00:10:50.956 "superblock": true, 00:10:50.956 "num_base_bdevs": 2, 00:10:50.956 "num_base_bdevs_discovered": 2, 00:10:50.956 "num_base_bdevs_operational": 2, 00:10:50.956 "base_bdevs_list": [ 00:10:50.956 { 00:10:50.956 "name": "pt1", 00:10:50.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.956 "is_configured": true, 00:10:50.956 "data_offset": 2048, 00:10:50.956 "data_size": 63488 00:10:50.956 }, 00:10:50.956 { 00:10:50.956 "name": "pt2", 00:10:50.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.956 "is_configured": true, 00:10:50.956 "data_offset": 2048, 00:10:50.956 "data_size": 63488 00:10:50.956 } 00:10:50.956 ] 00:10:50.956 }' 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.956 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.550 
20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.550 [2024-11-26 20:23:44.926505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.550 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.550 "name": "raid_bdev1", 00:10:51.550 "aliases": [ 00:10:51.550 "99a417da-a522-4f39-a4f0-a98ed9dd31ac" 00:10:51.550 ], 00:10:51.550 "product_name": "Raid Volume", 00:10:51.550 "block_size": 512, 00:10:51.550 "num_blocks": 126976, 00:10:51.550 "uuid": "99a417da-a522-4f39-a4f0-a98ed9dd31ac", 00:10:51.550 "assigned_rate_limits": { 00:10:51.550 "rw_ios_per_sec": 0, 00:10:51.550 "rw_mbytes_per_sec": 0, 00:10:51.550 "r_mbytes_per_sec": 0, 00:10:51.550 "w_mbytes_per_sec": 0 00:10:51.550 }, 00:10:51.550 "claimed": false, 00:10:51.550 "zoned": false, 00:10:51.550 "supported_io_types": { 00:10:51.550 "read": true, 00:10:51.551 "write": true, 00:10:51.551 "unmap": true, 00:10:51.551 "flush": true, 00:10:51.551 "reset": true, 00:10:51.551 "nvme_admin": false, 00:10:51.551 "nvme_io": false, 00:10:51.551 "nvme_io_md": false, 00:10:51.551 "write_zeroes": true, 00:10:51.551 "zcopy": false, 00:10:51.551 "get_zone_info": false, 00:10:51.551 "zone_management": false, 00:10:51.551 "zone_append": false, 00:10:51.551 "compare": false, 00:10:51.551 "compare_and_write": false, 00:10:51.551 "abort": false, 00:10:51.551 "seek_hole": false, 00:10:51.551 
"seek_data": false, 00:10:51.551 "copy": false, 00:10:51.551 "nvme_iov_md": false 00:10:51.551 }, 00:10:51.551 "memory_domains": [ 00:10:51.551 { 00:10:51.551 "dma_device_id": "system", 00:10:51.551 "dma_device_type": 1 00:10:51.551 }, 00:10:51.551 { 00:10:51.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.551 "dma_device_type": 2 00:10:51.551 }, 00:10:51.551 { 00:10:51.551 "dma_device_id": "system", 00:10:51.551 "dma_device_type": 1 00:10:51.551 }, 00:10:51.551 { 00:10:51.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.551 "dma_device_type": 2 00:10:51.551 } 00:10:51.551 ], 00:10:51.551 "driver_specific": { 00:10:51.551 "raid": { 00:10:51.551 "uuid": "99a417da-a522-4f39-a4f0-a98ed9dd31ac", 00:10:51.551 "strip_size_kb": 64, 00:10:51.551 "state": "online", 00:10:51.551 "raid_level": "concat", 00:10:51.551 "superblock": true, 00:10:51.551 "num_base_bdevs": 2, 00:10:51.551 "num_base_bdevs_discovered": 2, 00:10:51.551 "num_base_bdevs_operational": 2, 00:10:51.551 "base_bdevs_list": [ 00:10:51.551 { 00:10:51.551 "name": "pt1", 00:10:51.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.551 "is_configured": true, 00:10:51.551 "data_offset": 2048, 00:10:51.551 "data_size": 63488 00:10:51.551 }, 00:10:51.551 { 00:10:51.551 "name": "pt2", 00:10:51.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.551 "is_configured": true, 00:10:51.551 "data_offset": 2048, 00:10:51.551 "data_size": 63488 00:10:51.551 } 00:10:51.551 ] 00:10:51.551 } 00:10:51.551 } 00:10:51.551 }' 00:10:51.551 20:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.551 pt2' 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.551 20:23:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.551 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.815 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.815 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 [2024-11-26 20:23:45.178062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=99a417da-a522-4f39-a4f0-a98ed9dd31ac 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 99a417da-a522-4f39-a4f0-a98ed9dd31ac ']' 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 [2024-11-26 20:23:45.229616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.816 [2024-11-26 20:23:45.229711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.816 [2024-11-26 20:23:45.229829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.816 [2024-11-26 20:23:45.229888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.816 [2024-11-26 20:23:45.229902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 [2024-11-26 20:23:45.365451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:52.076 [2024-11-26 20:23:45.367644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:52.076 [2024-11-26 20:23:45.367722] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:52.076 [2024-11-26 20:23:45.367789] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:52.076 [2024-11-26 20:23:45.367813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.076 [2024-11-26 20:23:45.367826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:52.076 request: 00:10:52.076 { 00:10:52.076 "name": "raid_bdev1", 00:10:52.076 "raid_level": "concat", 00:10:52.076 "base_bdevs": [ 00:10:52.076 "malloc1", 00:10:52.076 "malloc2" 00:10:52.076 ], 00:10:52.076 "strip_size_kb": 64, 00:10:52.076 "superblock": false, 00:10:52.076 "method": "bdev_raid_create", 00:10:52.076 "req_id": 1 00:10:52.076 } 00:10:52.076 Got JSON-RPC error response 00:10:52.076 response: 00:10:52.076 { 00:10:52.076 "code": -17, 00:10:52.076 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:52.076 } 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 20:23:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 [2024-11-26 20:23:45.429416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:52.076 [2024-11-26 20:23:45.429555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.076 [2024-11-26 20:23:45.429610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:52.076 [2024-11-26 20:23:45.429651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.076 [2024-11-26 20:23:45.432235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.076 [2024-11-26 20:23:45.432359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.076 [2024-11-26 20:23:45.432512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:52.076 [2024-11-26 20:23:45.432635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.076 pt1 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:52.076 20:23:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.076 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.076 "name": "raid_bdev1", 00:10:52.076 "uuid": "99a417da-a522-4f39-a4f0-a98ed9dd31ac", 00:10:52.077 "strip_size_kb": 64, 00:10:52.077 "state": "configuring", 00:10:52.077 "raid_level": "concat", 00:10:52.077 "superblock": true, 00:10:52.077 "num_base_bdevs": 2, 00:10:52.077 "num_base_bdevs_discovered": 1, 00:10:52.077 "num_base_bdevs_operational": 2, 00:10:52.077 "base_bdevs_list": [ 
00:10:52.077 { 00:10:52.077 "name": "pt1", 00:10:52.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.077 "is_configured": true, 00:10:52.077 "data_offset": 2048, 00:10:52.077 "data_size": 63488 00:10:52.077 }, 00:10:52.077 { 00:10:52.077 "name": null, 00:10:52.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.077 "is_configured": false, 00:10:52.077 "data_offset": 2048, 00:10:52.077 "data_size": 63488 00:10:52.077 } 00:10:52.077 ] 00:10:52.077 }' 00:10:52.077 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.077 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.645 [2024-11-26 20:23:45.940671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.645 [2024-11-26 20:23:45.940759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.645 [2024-11-26 20:23:45.940784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:52.645 [2024-11-26 20:23:45.940797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.645 [2024-11-26 20:23:45.941336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.645 [2024-11-26 20:23:45.941425] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:52.645 [2024-11-26 20:23:45.941530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:52.645 [2024-11-26 20:23:45.941561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.645 [2024-11-26 20:23:45.941709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:52.645 [2024-11-26 20:23:45.941722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:52.645 [2024-11-26 20:23:45.942008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:52.645 [2024-11-26 20:23:45.942159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:52.645 [2024-11-26 20:23:45.942168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:52.645 [2024-11-26 20:23:45.942359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.645 pt2 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.645 "name": "raid_bdev1", 00:10:52.645 "uuid": "99a417da-a522-4f39-a4f0-a98ed9dd31ac", 00:10:52.645 "strip_size_kb": 64, 00:10:52.645 "state": "online", 00:10:52.645 "raid_level": "concat", 00:10:52.645 "superblock": true, 00:10:52.645 "num_base_bdevs": 2, 00:10:52.645 "num_base_bdevs_discovered": 2, 00:10:52.645 "num_base_bdevs_operational": 2, 00:10:52.645 "base_bdevs_list": [ 00:10:52.645 { 00:10:52.645 "name": "pt1", 00:10:52.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.645 "is_configured": true, 00:10:52.645 "data_offset": 2048, 00:10:52.645 "data_size": 63488 00:10:52.645 }, 00:10:52.645 { 00:10:52.645 "name": "pt2", 00:10:52.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.645 "is_configured": true, 00:10:52.645 "data_offset": 2048, 00:10:52.645 "data_size": 
63488 00:10:52.645 } 00:10:52.645 ] 00:10:52.645 }' 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.645 20:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.906 [2024-11-26 20:23:46.364231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.906 "name": "raid_bdev1", 00:10:52.906 "aliases": [ 00:10:52.906 "99a417da-a522-4f39-a4f0-a98ed9dd31ac" 00:10:52.906 ], 00:10:52.906 "product_name": "Raid Volume", 00:10:52.906 "block_size": 512, 00:10:52.906 "num_blocks": 126976, 00:10:52.906 "uuid": "99a417da-a522-4f39-a4f0-a98ed9dd31ac", 00:10:52.906 "assigned_rate_limits": { 00:10:52.906 
"rw_ios_per_sec": 0, 00:10:52.906 "rw_mbytes_per_sec": 0, 00:10:52.906 "r_mbytes_per_sec": 0, 00:10:52.906 "w_mbytes_per_sec": 0 00:10:52.906 }, 00:10:52.906 "claimed": false, 00:10:52.906 "zoned": false, 00:10:52.906 "supported_io_types": { 00:10:52.906 "read": true, 00:10:52.906 "write": true, 00:10:52.906 "unmap": true, 00:10:52.906 "flush": true, 00:10:52.906 "reset": true, 00:10:52.906 "nvme_admin": false, 00:10:52.906 "nvme_io": false, 00:10:52.906 "nvme_io_md": false, 00:10:52.906 "write_zeroes": true, 00:10:52.906 "zcopy": false, 00:10:52.906 "get_zone_info": false, 00:10:52.906 "zone_management": false, 00:10:52.906 "zone_append": false, 00:10:52.906 "compare": false, 00:10:52.906 "compare_and_write": false, 00:10:52.906 "abort": false, 00:10:52.906 "seek_hole": false, 00:10:52.906 "seek_data": false, 00:10:52.906 "copy": false, 00:10:52.906 "nvme_iov_md": false 00:10:52.906 }, 00:10:52.906 "memory_domains": [ 00:10:52.906 { 00:10:52.906 "dma_device_id": "system", 00:10:52.906 "dma_device_type": 1 00:10:52.906 }, 00:10:52.906 { 00:10:52.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.906 "dma_device_type": 2 00:10:52.906 }, 00:10:52.906 { 00:10:52.906 "dma_device_id": "system", 00:10:52.906 "dma_device_type": 1 00:10:52.906 }, 00:10:52.906 { 00:10:52.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.906 "dma_device_type": 2 00:10:52.906 } 00:10:52.906 ], 00:10:52.906 "driver_specific": { 00:10:52.906 "raid": { 00:10:52.906 "uuid": "99a417da-a522-4f39-a4f0-a98ed9dd31ac", 00:10:52.906 "strip_size_kb": 64, 00:10:52.906 "state": "online", 00:10:52.906 "raid_level": "concat", 00:10:52.906 "superblock": true, 00:10:52.906 "num_base_bdevs": 2, 00:10:52.906 "num_base_bdevs_discovered": 2, 00:10:52.906 "num_base_bdevs_operational": 2, 00:10:52.906 "base_bdevs_list": [ 00:10:52.906 { 00:10:52.906 "name": "pt1", 00:10:52.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.906 "is_configured": true, 00:10:52.906 "data_offset": 2048, 00:10:52.906 
"data_size": 63488 00:10:52.906 }, 00:10:52.906 { 00:10:52.906 "name": "pt2", 00:10:52.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.906 "is_configured": true, 00:10:52.906 "data_offset": 2048, 00:10:52.906 "data_size": 63488 00:10:52.906 } 00:10:52.906 ] 00:10:52.906 } 00:10:52.906 } 00:10:52.906 }' 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.906 pt2' 00:10:52.906 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.166 [2024-11-26 20:23:46.583851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 99a417da-a522-4f39-a4f0-a98ed9dd31ac '!=' 99a417da-a522-4f39-a4f0-a98ed9dd31ac ']' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62450 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62450 ']' 
00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62450 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62450 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62450' 00:10:53.166 killing process with pid 62450 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62450 00:10:53.166 [2024-11-26 20:23:46.652326] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.166 [2024-11-26 20:23:46.652500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.166 20:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62450 00:10:53.166 [2024-11-26 20:23:46.652602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.166 [2024-11-26 20:23:46.652620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:53.426 [2024-11-26 20:23:46.906555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.804 20:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:54.804 ************************************ 00:10:54.804 END TEST raid_superblock_test 00:10:54.804 ************************************ 00:10:54.804 00:10:54.804 real 0m5.012s 00:10:54.804 user 0m7.000s 00:10:54.804 sys 
0m0.787s 00:10:54.804 20:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.804 20:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.804 20:23:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:54.804 20:23:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.804 20:23:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.804 20:23:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.804 ************************************ 00:10:54.804 START TEST raid_read_error_test 00:10:54.804 ************************************ 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.804 
20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:54.804 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PwzakkW453 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62667 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62667 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62667 ']' 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.073 20:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.073 [2024-11-26 20:23:48.456462] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:10:55.073 [2024-11-26 20:23:48.456605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62667 ] 00:10:55.334 [2024-11-26 20:23:48.638035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.334 [2024-11-26 20:23:48.773069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.593 [2024-11-26 20:23:49.015456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.593 [2024-11-26 20:23:49.015554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.852 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.852 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.852 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.852 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.852 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.852 20:23:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.112 BaseBdev1_malloc 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.112 true 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.112 [2024-11-26 20:23:49.434571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:56.112 [2024-11-26 20:23:49.434733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.112 [2024-11-26 20:23:49.434769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:56.112 [2024-11-26 20:23:49.434786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.112 [2024-11-26 20:23:49.437385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.112 [2024-11-26 20:23:49.437432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:56.112 BaseBdev1 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.112 20:23:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.112 BaseBdev2_malloc 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.112 true 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.112 [2024-11-26 20:23:49.508329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:56.112 [2024-11-26 20:23:49.508402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.112 [2024-11-26 20:23:49.508425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:56.112 [2024-11-26 20:23:49.508437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.112 [2024-11-26 20:23:49.510951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.112 [2024-11-26 20:23:49.511000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:10:56.112 BaseBdev2 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.112 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.112 [2024-11-26 20:23:49.520359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.112 [2024-11-26 20:23:49.522505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.112 [2024-11-26 20:23:49.522745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.112 [2024-11-26 20:23:49.522764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:56.112 [2024-11-26 20:23:49.523058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:56.112 [2024-11-26 20:23:49.523305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.112 [2024-11-26 20:23:49.523322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:56.112 [2024-11-26 20:23:49.523537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.113 "name": "raid_bdev1", 00:10:56.113 "uuid": "385f5736-049c-4781-b9f6-ecada5fc6ec0", 00:10:56.113 "strip_size_kb": 64, 00:10:56.113 "state": "online", 00:10:56.113 "raid_level": "concat", 00:10:56.113 "superblock": true, 00:10:56.113 "num_base_bdevs": 2, 00:10:56.113 "num_base_bdevs_discovered": 2, 00:10:56.113 "num_base_bdevs_operational": 2, 00:10:56.113 "base_bdevs_list": [ 00:10:56.113 { 00:10:56.113 "name": "BaseBdev1", 00:10:56.113 "uuid": "5c727abf-71d8-5a3b-9cd4-e3be1fa124f1", 00:10:56.113 "is_configured": true, 00:10:56.113 "data_offset": 2048, 00:10:56.113 "data_size": 63488 
00:10:56.113 }, 00:10:56.113 { 00:10:56.113 "name": "BaseBdev2", 00:10:56.113 "uuid": "579ba960-56b1-57b1-8410-5465562e2fb4", 00:10:56.113 "is_configured": true, 00:10:56.113 "data_offset": 2048, 00:10:56.113 "data_size": 63488 00:10:56.113 } 00:10:56.113 ] 00:10:56.113 }' 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.113 20:23:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.680 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:56.680 20:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.680 [2024-11-26 20:23:50.073185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.620 20:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.620 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.620 20:23:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.620 "name": "raid_bdev1", 00:10:57.620 "uuid": "385f5736-049c-4781-b9f6-ecada5fc6ec0", 00:10:57.620 "strip_size_kb": 64, 00:10:57.620 "state": "online", 00:10:57.620 "raid_level": "concat", 00:10:57.620 "superblock": true, 00:10:57.620 "num_base_bdevs": 2, 00:10:57.620 "num_base_bdevs_discovered": 2, 00:10:57.620 "num_base_bdevs_operational": 2, 00:10:57.620 "base_bdevs_list": [ 00:10:57.620 { 00:10:57.620 "name": "BaseBdev1", 00:10:57.620 "uuid": "5c727abf-71d8-5a3b-9cd4-e3be1fa124f1", 00:10:57.620 "is_configured": true, 00:10:57.620 "data_offset": 2048, 00:10:57.620 "data_size": 63488 
00:10:57.620 }, 00:10:57.620 { 00:10:57.620 "name": "BaseBdev2", 00:10:57.620 "uuid": "579ba960-56b1-57b1-8410-5465562e2fb4", 00:10:57.620 "is_configured": true, 00:10:57.620 "data_offset": 2048, 00:10:57.620 "data_size": 63488 00:10:57.620 } 00:10:57.620 ] 00:10:57.620 }' 00:10:57.620 20:23:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.620 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.218 20:23:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.218 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.218 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.218 [2024-11-26 20:23:51.494771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.218 [2024-11-26 20:23:51.494893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.219 [2024-11-26 20:23:51.498267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.219 [2024-11-26 20:23:51.498364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.219 [2024-11-26 20:23:51.498424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.219 [2024-11-26 20:23:51.498484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:58.219 { 00:10:58.219 "results": [ 00:10:58.219 { 00:10:58.219 "job": "raid_bdev1", 00:10:58.219 "core_mask": "0x1", 00:10:58.219 "workload": "randrw", 00:10:58.219 "percentage": 50, 00:10:58.219 "status": "finished", 00:10:58.219 "queue_depth": 1, 00:10:58.219 "io_size": 131072, 00:10:58.219 "runtime": 1.422276, 00:10:58.219 "iops": 13293.481715222642, 00:10:58.219 "mibps": 1661.6852144028303, 00:10:58.219 
"io_failed": 1, 00:10:58.219 "io_timeout": 0, 00:10:58.219 "avg_latency_us": 103.94682595477249, 00:10:58.219 "min_latency_us": 30.63056768558952, 00:10:58.219 "max_latency_us": 1774.3371179039302 00:10:58.219 } 00:10:58.219 ], 00:10:58.219 "core_count": 1 00:10:58.219 } 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62667 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62667 ']' 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62667 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62667 00:10:58.219 killing process with pid 62667 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62667' 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62667 00:10:58.219 [2024-11-26 20:23:51.540648] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.219 20:23:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62667 00:10:58.219 [2024-11-26 20:23:51.707506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.598 20:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PwzakkW453 00:10:59.598 20:23:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:59.598 20:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:59.857 20:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:59.857 20:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:59.857 ************************************ 00:10:59.857 END TEST raid_read_error_test 00:10:59.857 ************************************ 00:10:59.857 20:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.857 20:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.857 20:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:59.857 00:10:59.857 real 0m4.817s 00:10:59.857 user 0m5.821s 00:10:59.857 sys 0m0.569s 00:10:59.857 20:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.857 20:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.857 20:23:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:59.857 20:23:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:59.857 20:23:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.857 20:23:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.857 ************************************ 00:10:59.857 START TEST raid_write_error_test 00:10:59.857 ************************************ 00:10:59.857 20:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:10:59.857 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:59.857 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:59.858 20:23:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:59.858 20:23:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.76zhn1QQBv 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62807 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62807 00:10:59.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62807 ']' 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.858 20:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.858 [2024-11-26 20:23:53.327191] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
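The read-error run above finished with `fail_per_s=0.70`, which `bdev_raid.sh@845` extracts from the bdevperf summary line with `grep`/`awk '{print $6}'`. The numbers in the results block are internally consistent, as this arithmetic sketch (figures copied from the read-test results above) shows:

```python
# Figures from the raid_read_error_test results block in the log
runtime_s = 1.422276
io_failed = 1
iops = 13293.481715222642
io_size = 131072  # bdevperf was started with -o 128k

# fail_per_s = failed I/Os divided by runtime; matches the extracted 0.70
fail_per_s = io_failed / runtime_s
print(f"{fail_per_s:.2f}")  # 0.70

# MiB/s follows directly from IOPS at a 128 KiB I/O size
mibps = iops * io_size / (1024 * 1024)
print(f"{mibps:.4f}")  # 1661.6852
```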
00:10:59.858 [2024-11-26 20:23:53.327330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62807 ] 00:11:00.117 [2024-11-26 20:23:53.492862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.117 [2024-11-26 20:23:53.636836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.376 [2024-11-26 20:23:53.881756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.376 [2024-11-26 20:23:53.881904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 BaseBdev1_malloc 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 true 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 [2024-11-26 20:23:54.317460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:00.945 [2024-11-26 20:23:54.317530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.945 [2024-11-26 20:23:54.317556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:00.945 [2024-11-26 20:23:54.317569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.945 [2024-11-26 20:23:54.320051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.945 [2024-11-26 20:23:54.320185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:00.945 BaseBdev1 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 BaseBdev2_malloc 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:00.945 20:23:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 true 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 [2024-11-26 20:23:54.384663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:00.945 [2024-11-26 20:23:54.384729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.945 [2024-11-26 20:23:54.384751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:00.945 [2024-11-26 20:23:54.384764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.945 [2024-11-26 20:23:54.387201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.945 [2024-11-26 20:23:54.387262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:00.945 BaseBdev2 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 [2024-11-26 20:23:54.400726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:00.945 [2024-11-26 20:23:54.402948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.945 [2024-11-26 20:23:54.403274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.945 [2024-11-26 20:23:54.403338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:00.945 [2024-11-26 20:23:54.403668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:00.945 [2024-11-26 20:23:54.403932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.945 [2024-11-26 20:23:54.403985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:00.945 [2024-11-26 20:23:54.404194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.945 20:23:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.945 "name": "raid_bdev1", 00:11:00.945 "uuid": "89fa8493-739e-42fb-ac48-79ac701880dd", 00:11:00.945 "strip_size_kb": 64, 00:11:00.945 "state": "online", 00:11:00.945 "raid_level": "concat", 00:11:00.945 "superblock": true, 00:11:00.945 "num_base_bdevs": 2, 00:11:00.945 "num_base_bdevs_discovered": 2, 00:11:00.945 "num_base_bdevs_operational": 2, 00:11:00.945 "base_bdevs_list": [ 00:11:00.945 { 00:11:00.945 "name": "BaseBdev1", 00:11:00.945 "uuid": "033a6810-a44c-5fbb-98f0-91ac9e161567", 00:11:00.945 "is_configured": true, 00:11:00.945 "data_offset": 2048, 00:11:00.945 "data_size": 63488 00:11:00.945 }, 00:11:00.945 { 00:11:00.945 "name": "BaseBdev2", 00:11:00.945 "uuid": "ae69683e-eab4-5ac6-9590-62458853571f", 00:11:00.945 "is_configured": true, 00:11:00.945 "data_offset": 2048, 00:11:00.945 "data_size": 63488 00:11:00.945 } 00:11:00.945 ] 00:11:00.945 }' 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.945 20:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.512 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:11:01.512 20:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:01.512 [2024-11-26 20:23:55.017192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
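After injecting the error, `bdev_raid.sh@831`-`@835` above picks `expected_num_base_bdevs` from the raid level: the trace shows only the non-raid1 branch (`[[ concat = raid1 ]]` is false, so the expectation stays at 2). A small sketch of that branch; the raid1 value below is an assumption not visible in this log (a redundant level is presumed to drop the errored base bdev and stay online with one fewer operational member):

```python
def expected_num_base_bdevs(raid_level: str, num_base_bdevs: int) -> int:
    # Only the else-branch (concat -> num_base_bdevs) is confirmed by the log;
    # the raid1 branch is our assumption about the redundant case.
    if raid_level == "raid1":
        return num_base_bdevs - 1
    return num_base_bdevs

# matches `expected_num_base_bdevs=2` for the concat run traced above
print(expected_num_base_bdevs("concat", 2))  # 2
```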
00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.455 "name": "raid_bdev1", 00:11:02.455 "uuid": "89fa8493-739e-42fb-ac48-79ac701880dd", 00:11:02.455 "strip_size_kb": 64, 00:11:02.455 "state": "online", 00:11:02.455 "raid_level": "concat", 00:11:02.455 "superblock": true, 00:11:02.455 "num_base_bdevs": 2, 00:11:02.455 "num_base_bdevs_discovered": 2, 00:11:02.455 "num_base_bdevs_operational": 2, 00:11:02.455 "base_bdevs_list": [ 00:11:02.455 { 00:11:02.455 "name": "BaseBdev1", 00:11:02.455 "uuid": "033a6810-a44c-5fbb-98f0-91ac9e161567", 00:11:02.455 "is_configured": true, 00:11:02.455 "data_offset": 2048, 00:11:02.455 "data_size": 63488 00:11:02.455 }, 00:11:02.455 { 00:11:02.455 "name": "BaseBdev2", 00:11:02.455 "uuid": "ae69683e-eab4-5ac6-9590-62458853571f", 00:11:02.455 "is_configured": true, 00:11:02.455 "data_offset": 2048, 00:11:02.455 "data_size": 63488 00:11:02.455 } 00:11:02.455 ] 00:11:02.455 }' 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.455 20:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.020 20:23:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.021 [2024-11-26 20:23:56.398271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.021 [2024-11-26 20:23:56.398310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.021 [2024-11-26 20:23:56.401567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.021 [2024-11-26 20:23:56.401695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.021 [2024-11-26 20:23:56.401745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.021 [2024-11-26 20:23:56.401761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:03.021 { 00:11:03.021 "results": [ 00:11:03.021 { 00:11:03.021 "job": "raid_bdev1", 00:11:03.021 "core_mask": "0x1", 00:11:03.021 "workload": "randrw", 00:11:03.021 "percentage": 50, 00:11:03.021 "status": "finished", 00:11:03.021 "queue_depth": 1, 00:11:03.021 "io_size": 131072, 00:11:03.021 "runtime": 1.381617, 00:11:03.021 "iops": 13406.754549198511, 00:11:03.021 "mibps": 1675.8443186498139, 00:11:03.021 "io_failed": 1, 00:11:03.021 "io_timeout": 0, 00:11:03.021 "avg_latency_us": 103.00090655436733, 00:11:03.021 "min_latency_us": 29.289082969432314, 00:11:03.021 "max_latency_us": 1781.4917030567685 00:11:03.021 } 00:11:03.021 ], 00:11:03.021 "core_count": 1 00:11:03.021 } 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62807 00:11:03.021 20:23:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62807 ']' 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62807 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62807 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62807' 00:11:03.021 killing process with pid 62807 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62807 00:11:03.021 [2024-11-26 20:23:56.451077] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.021 20:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62807 00:11:03.279 [2024-11-26 20:23:56.619007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.76zhn1QQBv 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:04.658 20:23:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:04.658 00:11:04.658 real 0m4.842s 00:11:04.658 user 0m5.862s 00:11:04.658 sys 0m0.570s 00:11:04.658 ************************************ 00:11:04.658 END TEST raid_write_error_test 00:11:04.658 ************************************ 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.658 20:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.658 20:23:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:04.658 20:23:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:04.658 20:23:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:04.658 20:23:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.658 20:23:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.658 ************************************ 00:11:04.658 START TEST raid_state_function_test 00:11:04.658 ************************************ 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62956 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62956' 00:11:04.658 Process raid pid: 62956 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62956 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62956 ']' 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.658 20:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.917 [2024-11-26 20:23:58.234870] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:11:04.917 [2024-11-26 20:23:58.235608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.917 [2024-11-26 20:23:58.416725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.175 [2024-11-26 20:23:58.555550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.432 [2024-11-26 20:23:58.808289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.432 [2024-11-26 20:23:58.808345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.723 [2024-11-26 20:23:59.156924] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.723 [2024-11-26 20:23:59.156984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.723 [2024-11-26 20:23:59.156996] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.723 [2024-11-26 20:23:59.157007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.723 20:23:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.723 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.724 "name": "Existed_Raid", 00:11:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.724 "strip_size_kb": 0, 00:11:05.724 "state": "configuring", 00:11:05.724 
"raid_level": "raid1", 00:11:05.724 "superblock": false, 00:11:05.724 "num_base_bdevs": 2, 00:11:05.724 "num_base_bdevs_discovered": 0, 00:11:05.724 "num_base_bdevs_operational": 2, 00:11:05.724 "base_bdevs_list": [ 00:11:05.724 { 00:11:05.724 "name": "BaseBdev1", 00:11:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.724 "is_configured": false, 00:11:05.724 "data_offset": 0, 00:11:05.724 "data_size": 0 00:11:05.724 }, 00:11:05.724 { 00:11:05.724 "name": "BaseBdev2", 00:11:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.724 "is_configured": false, 00:11:05.724 "data_offset": 0, 00:11:05.724 "data_size": 0 00:11:05.724 } 00:11:05.724 ] 00:11:05.724 }' 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.724 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.290 [2024-11-26 20:23:59.648103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.290 [2024-11-26 20:23:59.648145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:06.290 [2024-11-26 20:23:59.656073] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.290 [2024-11-26 20:23:59.656125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.290 [2024-11-26 20:23:59.656136] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.290 [2024-11-26 20:23:59.656150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.290 [2024-11-26 20:23:59.707680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.290 BaseBdev1 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.290 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.291 [ 00:11:06.291 { 00:11:06.291 "name": "BaseBdev1", 00:11:06.291 "aliases": [ 00:11:06.291 "7d6c135e-be7f-4339-9f7f-f03eb406b4d8" 00:11:06.291 ], 00:11:06.291 "product_name": "Malloc disk", 00:11:06.291 "block_size": 512, 00:11:06.291 "num_blocks": 65536, 00:11:06.291 "uuid": "7d6c135e-be7f-4339-9f7f-f03eb406b4d8", 00:11:06.291 "assigned_rate_limits": { 00:11:06.291 "rw_ios_per_sec": 0, 00:11:06.291 "rw_mbytes_per_sec": 0, 00:11:06.291 "r_mbytes_per_sec": 0, 00:11:06.291 "w_mbytes_per_sec": 0 00:11:06.291 }, 00:11:06.291 "claimed": true, 00:11:06.291 "claim_type": "exclusive_write", 00:11:06.291 "zoned": false, 00:11:06.291 "supported_io_types": { 00:11:06.291 "read": true, 00:11:06.291 "write": true, 00:11:06.291 "unmap": true, 00:11:06.291 "flush": true, 00:11:06.291 "reset": true, 00:11:06.291 "nvme_admin": false, 00:11:06.291 "nvme_io": false, 00:11:06.291 "nvme_io_md": false, 00:11:06.291 "write_zeroes": true, 00:11:06.291 "zcopy": true, 00:11:06.291 "get_zone_info": false, 00:11:06.291 "zone_management": false, 00:11:06.291 "zone_append": false, 00:11:06.291 "compare": false, 00:11:06.291 "compare_and_write": false, 00:11:06.291 "abort": true, 00:11:06.291 "seek_hole": false, 00:11:06.291 "seek_data": false, 00:11:06.291 "copy": true, 00:11:06.291 "nvme_iov_md": 
false 00:11:06.291 }, 00:11:06.291 "memory_domains": [ 00:11:06.291 { 00:11:06.291 "dma_device_id": "system", 00:11:06.291 "dma_device_type": 1 00:11:06.291 }, 00:11:06.291 { 00:11:06.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.291 "dma_device_type": 2 00:11:06.291 } 00:11:06.291 ], 00:11:06.291 "driver_specific": {} 00:11:06.291 } 00:11:06.291 ] 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.291 
20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.291 "name": "Existed_Raid", 00:11:06.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.291 "strip_size_kb": 0, 00:11:06.291 "state": "configuring", 00:11:06.291 "raid_level": "raid1", 00:11:06.291 "superblock": false, 00:11:06.291 "num_base_bdevs": 2, 00:11:06.291 "num_base_bdevs_discovered": 1, 00:11:06.291 "num_base_bdevs_operational": 2, 00:11:06.291 "base_bdevs_list": [ 00:11:06.291 { 00:11:06.291 "name": "BaseBdev1", 00:11:06.291 "uuid": "7d6c135e-be7f-4339-9f7f-f03eb406b4d8", 00:11:06.291 "is_configured": true, 00:11:06.291 "data_offset": 0, 00:11:06.291 "data_size": 65536 00:11:06.291 }, 00:11:06.291 { 00:11:06.291 "name": "BaseBdev2", 00:11:06.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.291 "is_configured": false, 00:11:06.291 "data_offset": 0, 00:11:06.291 "data_size": 0 00:11:06.291 } 00:11:06.291 ] 00:11:06.291 }' 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.291 20:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.860 [2024-11-26 20:24:00.210916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.860 [2024-11-26 20:24:00.211041] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.860 [2024-11-26 20:24:00.222946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.860 [2024-11-26 20:24:00.225129] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.860 [2024-11-26 20:24:00.225227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.860 "name": "Existed_Raid", 00:11:06.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.860 "strip_size_kb": 0, 00:11:06.860 "state": "configuring", 00:11:06.860 "raid_level": "raid1", 00:11:06.860 "superblock": false, 00:11:06.860 "num_base_bdevs": 2, 00:11:06.860 "num_base_bdevs_discovered": 1, 00:11:06.860 "num_base_bdevs_operational": 2, 00:11:06.860 "base_bdevs_list": [ 00:11:06.860 { 00:11:06.860 "name": "BaseBdev1", 00:11:06.860 "uuid": "7d6c135e-be7f-4339-9f7f-f03eb406b4d8", 00:11:06.860 "is_configured": true, 00:11:06.860 "data_offset": 0, 00:11:06.860 "data_size": 65536 00:11:06.860 }, 00:11:06.860 { 00:11:06.860 "name": "BaseBdev2", 00:11:06.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.860 "is_configured": false, 00:11:06.860 "data_offset": 0, 00:11:06.860 "data_size": 0 00:11:06.860 } 00:11:06.860 ] 
00:11:06.860 }' 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.860 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.119 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:07.119 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.119 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 [2024-11-26 20:24:00.714632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.378 [2024-11-26 20:24:00.714704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.378 [2024-11-26 20:24:00.714714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:07.378 [2024-11-26 20:24:00.715009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:07.378 [2024-11-26 20:24:00.715206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.378 [2024-11-26 20:24:00.715222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:07.378 [2024-11-26 20:24:00.715557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.378 BaseBdev2 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 [ 00:11:07.378 { 00:11:07.378 "name": "BaseBdev2", 00:11:07.378 "aliases": [ 00:11:07.378 "28de6771-be1d-421f-a63f-db8129e296bc" 00:11:07.378 ], 00:11:07.378 "product_name": "Malloc disk", 00:11:07.378 "block_size": 512, 00:11:07.378 "num_blocks": 65536, 00:11:07.378 "uuid": "28de6771-be1d-421f-a63f-db8129e296bc", 00:11:07.378 "assigned_rate_limits": { 00:11:07.378 "rw_ios_per_sec": 0, 00:11:07.378 "rw_mbytes_per_sec": 0, 00:11:07.378 "r_mbytes_per_sec": 0, 00:11:07.378 "w_mbytes_per_sec": 0 00:11:07.378 }, 00:11:07.378 "claimed": true, 00:11:07.378 "claim_type": "exclusive_write", 00:11:07.378 "zoned": false, 00:11:07.378 "supported_io_types": { 00:11:07.378 "read": true, 00:11:07.378 "write": true, 00:11:07.378 "unmap": true, 00:11:07.378 "flush": true, 00:11:07.378 "reset": true, 00:11:07.378 "nvme_admin": false, 00:11:07.378 "nvme_io": false, 00:11:07.378 "nvme_io_md": false, 00:11:07.378 "write_zeroes": 
true, 00:11:07.378 "zcopy": true, 00:11:07.378 "get_zone_info": false, 00:11:07.378 "zone_management": false, 00:11:07.378 "zone_append": false, 00:11:07.378 "compare": false, 00:11:07.378 "compare_and_write": false, 00:11:07.378 "abort": true, 00:11:07.378 "seek_hole": false, 00:11:07.378 "seek_data": false, 00:11:07.378 "copy": true, 00:11:07.378 "nvme_iov_md": false 00:11:07.378 }, 00:11:07.378 "memory_domains": [ 00:11:07.378 { 00:11:07.378 "dma_device_id": "system", 00:11:07.378 "dma_device_type": 1 00:11:07.378 }, 00:11:07.378 { 00:11:07.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.378 "dma_device_type": 2 00:11:07.378 } 00:11:07.378 ], 00:11:07.378 "driver_specific": {} 00:11:07.378 } 00:11:07.378 ] 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.378 20:24:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.378 "name": "Existed_Raid", 00:11:07.378 "uuid": "71522646-9ac5-4053-912c-7a5d5ca3c0e1", 00:11:07.378 "strip_size_kb": 0, 00:11:07.378 "state": "online", 00:11:07.378 "raid_level": "raid1", 00:11:07.378 "superblock": false, 00:11:07.378 "num_base_bdevs": 2, 00:11:07.378 "num_base_bdevs_discovered": 2, 00:11:07.378 "num_base_bdevs_operational": 2, 00:11:07.378 "base_bdevs_list": [ 00:11:07.378 { 00:11:07.378 "name": "BaseBdev1", 00:11:07.378 "uuid": "7d6c135e-be7f-4339-9f7f-f03eb406b4d8", 00:11:07.378 "is_configured": true, 00:11:07.378 "data_offset": 0, 00:11:07.378 "data_size": 65536 00:11:07.378 }, 00:11:07.378 { 00:11:07.378 "name": "BaseBdev2", 00:11:07.378 "uuid": "28de6771-be1d-421f-a63f-db8129e296bc", 00:11:07.378 "is_configured": true, 00:11:07.378 "data_offset": 0, 00:11:07.378 "data_size": 65536 00:11:07.378 } 00:11:07.378 ] 00:11:07.378 }' 00:11:07.378 20:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.378 20:24:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.946 [2024-11-26 20:24:01.258100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.946 "name": "Existed_Raid", 00:11:07.946 "aliases": [ 00:11:07.946 "71522646-9ac5-4053-912c-7a5d5ca3c0e1" 00:11:07.946 ], 00:11:07.946 "product_name": "Raid Volume", 00:11:07.946 "block_size": 512, 00:11:07.946 "num_blocks": 65536, 00:11:07.946 "uuid": "71522646-9ac5-4053-912c-7a5d5ca3c0e1", 00:11:07.946 "assigned_rate_limits": { 00:11:07.946 "rw_ios_per_sec": 0, 00:11:07.946 "rw_mbytes_per_sec": 0, 00:11:07.946 "r_mbytes_per_sec": 0, 00:11:07.946 
"w_mbytes_per_sec": 0 00:11:07.946 }, 00:11:07.946 "claimed": false, 00:11:07.946 "zoned": false, 00:11:07.946 "supported_io_types": { 00:11:07.946 "read": true, 00:11:07.946 "write": true, 00:11:07.946 "unmap": false, 00:11:07.946 "flush": false, 00:11:07.946 "reset": true, 00:11:07.946 "nvme_admin": false, 00:11:07.946 "nvme_io": false, 00:11:07.946 "nvme_io_md": false, 00:11:07.946 "write_zeroes": true, 00:11:07.946 "zcopy": false, 00:11:07.946 "get_zone_info": false, 00:11:07.946 "zone_management": false, 00:11:07.946 "zone_append": false, 00:11:07.946 "compare": false, 00:11:07.946 "compare_and_write": false, 00:11:07.946 "abort": false, 00:11:07.946 "seek_hole": false, 00:11:07.946 "seek_data": false, 00:11:07.946 "copy": false, 00:11:07.946 "nvme_iov_md": false 00:11:07.946 }, 00:11:07.946 "memory_domains": [ 00:11:07.946 { 00:11:07.946 "dma_device_id": "system", 00:11:07.946 "dma_device_type": 1 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.946 "dma_device_type": 2 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "system", 00:11:07.946 "dma_device_type": 1 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.946 "dma_device_type": 2 00:11:07.946 } 00:11:07.946 ], 00:11:07.946 "driver_specific": { 00:11:07.946 "raid": { 00:11:07.946 "uuid": "71522646-9ac5-4053-912c-7a5d5ca3c0e1", 00:11:07.946 "strip_size_kb": 0, 00:11:07.946 "state": "online", 00:11:07.946 "raid_level": "raid1", 00:11:07.946 "superblock": false, 00:11:07.946 "num_base_bdevs": 2, 00:11:07.946 "num_base_bdevs_discovered": 2, 00:11:07.946 "num_base_bdevs_operational": 2, 00:11:07.946 "base_bdevs_list": [ 00:11:07.946 { 00:11:07.946 "name": "BaseBdev1", 00:11:07.946 "uuid": "7d6c135e-be7f-4339-9f7f-f03eb406b4d8", 00:11:07.946 "is_configured": true, 00:11:07.946 "data_offset": 0, 00:11:07.946 "data_size": 65536 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "name": "BaseBdev2", 00:11:07.946 "uuid": 
"28de6771-be1d-421f-a63f-db8129e296bc", 00:11:07.946 "is_configured": true, 00:11:07.946 "data_offset": 0, 00:11:07.946 "data_size": 65536 00:11:07.946 } 00:11:07.946 ] 00:11:07.946 } 00:11:07.946 } 00:11:07.946 }' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.946 BaseBdev2' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.946 20:24:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.946 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.946 [2024-11-26 20:24:01.497453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.206 "name": "Existed_Raid", 00:11:08.206 "uuid": "71522646-9ac5-4053-912c-7a5d5ca3c0e1", 00:11:08.206 "strip_size_kb": 0, 00:11:08.206 "state": "online", 00:11:08.206 "raid_level": "raid1", 00:11:08.206 "superblock": false, 00:11:08.206 "num_base_bdevs": 2, 00:11:08.206 "num_base_bdevs_discovered": 1, 00:11:08.206 "num_base_bdevs_operational": 1, 00:11:08.206 "base_bdevs_list": [ 00:11:08.206 { 
00:11:08.206 "name": null, 00:11:08.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.206 "is_configured": false, 00:11:08.206 "data_offset": 0, 00:11:08.206 "data_size": 65536 00:11:08.206 }, 00:11:08.206 { 00:11:08.206 "name": "BaseBdev2", 00:11:08.206 "uuid": "28de6771-be1d-421f-a63f-db8129e296bc", 00:11:08.206 "is_configured": true, 00:11:08.206 "data_offset": 0, 00:11:08.206 "data_size": 65536 00:11:08.206 } 00:11:08.206 ] 00:11:08.206 }' 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.206 20:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:08.773 [2024-11-26 20:24:02.136789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.773 [2024-11-26 20:24:02.136909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.773 [2024-11-26 20:24:02.254759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.773 [2024-11-26 20:24:02.254827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.773 [2024-11-26 20:24:02.254840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62956 00:11:08.773 20:24:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62956 ']' 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62956 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.773 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62956 00:11:09.032 killing process with pid 62956 00:11:09.032 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.032 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.032 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62956' 00:11:09.032 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62956 00:11:09.032 [2024-11-26 20:24:02.353517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.032 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62956 00:11:09.032 [2024-11-26 20:24:02.374618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.421 ************************************ 00:11:10.421 END TEST raid_state_function_test 00:11:10.421 ************************************ 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:10.421 00:11:10.421 real 0m5.591s 00:11:10.421 user 0m8.049s 00:11:10.421 sys 0m0.850s 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 20:24:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:10.421 20:24:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:10.421 20:24:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.421 20:24:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 ************************************ 00:11:10.421 START TEST raid_state_function_test_sb 00:11:10.421 ************************************ 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:10.421 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63215 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63215' 00:11:10.422 Process raid pid: 63215 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63215 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63215 ']' 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.422 20:24:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.422 20:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.422 [2024-11-26 20:24:03.894975] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:11:10.422 [2024-11-26 20:24:03.895124] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.680 [2024-11-26 20:24:04.078716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.680 [2024-11-26 20:24:04.217288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.967 [2024-11-26 20:24:04.477040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.967 [2024-11-26 20:24:04.477092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.533 [2024-11-26 20:24:04.864786] 
bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.533 [2024-11-26 20:24:04.864916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.533 [2024-11-26 20:24:04.864933] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.533 [2024-11-26 20:24:04.864945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:11.533 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.534 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.534 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.534 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.534 "name": "Existed_Raid", 00:11:11.534 "uuid": "d022c99f-bea6-420d-9cdb-0e5903966ac0", 00:11:11.534 "strip_size_kb": 0, 00:11:11.534 "state": "configuring", 00:11:11.534 "raid_level": "raid1", 00:11:11.534 "superblock": true, 00:11:11.534 "num_base_bdevs": 2, 00:11:11.534 "num_base_bdevs_discovered": 0, 00:11:11.534 "num_base_bdevs_operational": 2, 00:11:11.534 "base_bdevs_list": [ 00:11:11.534 { 00:11:11.534 "name": "BaseBdev1", 00:11:11.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.534 "is_configured": false, 00:11:11.534 "data_offset": 0, 00:11:11.534 "data_size": 0 00:11:11.534 }, 00:11:11.534 { 00:11:11.534 "name": "BaseBdev2", 00:11:11.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.534 "is_configured": false, 00:11:11.534 "data_offset": 0, 00:11:11.534 "data_size": 0 00:11:11.534 } 00:11:11.534 ] 00:11:11.534 }' 00:11:11.534 20:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.534 20:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.101 [2024-11-26 20:24:05.375842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:12.101 [2024-11-26 20:24:05.375947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.101 [2024-11-26 20:24:05.387816] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.101 [2024-11-26 20:24:05.387913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.101 [2024-11-26 20:24:05.387953] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.101 [2024-11-26 20:24:05.387985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.101 [2024-11-26 20:24:05.444189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.101 BaseBdev1 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.101 [ 00:11:12.101 { 00:11:12.101 "name": "BaseBdev1", 00:11:12.101 "aliases": [ 00:11:12.101 "88ebd631-ff03-4825-b45d-cb79e586654a" 00:11:12.101 ], 00:11:12.101 "product_name": "Malloc disk", 00:11:12.101 "block_size": 512, 00:11:12.101 "num_blocks": 65536, 00:11:12.101 "uuid": "88ebd631-ff03-4825-b45d-cb79e586654a", 00:11:12.101 "assigned_rate_limits": { 00:11:12.101 "rw_ios_per_sec": 0, 00:11:12.101 "rw_mbytes_per_sec": 0, 00:11:12.101 "r_mbytes_per_sec": 0, 00:11:12.101 "w_mbytes_per_sec": 0 00:11:12.101 }, 00:11:12.101 "claimed": true, 
00:11:12.101 "claim_type": "exclusive_write", 00:11:12.101 "zoned": false, 00:11:12.101 "supported_io_types": { 00:11:12.101 "read": true, 00:11:12.101 "write": true, 00:11:12.101 "unmap": true, 00:11:12.101 "flush": true, 00:11:12.101 "reset": true, 00:11:12.101 "nvme_admin": false, 00:11:12.101 "nvme_io": false, 00:11:12.101 "nvme_io_md": false, 00:11:12.101 "write_zeroes": true, 00:11:12.101 "zcopy": true, 00:11:12.101 "get_zone_info": false, 00:11:12.101 "zone_management": false, 00:11:12.101 "zone_append": false, 00:11:12.101 "compare": false, 00:11:12.101 "compare_and_write": false, 00:11:12.101 "abort": true, 00:11:12.101 "seek_hole": false, 00:11:12.101 "seek_data": false, 00:11:12.101 "copy": true, 00:11:12.101 "nvme_iov_md": false 00:11:12.101 }, 00:11:12.101 "memory_domains": [ 00:11:12.101 { 00:11:12.101 "dma_device_id": "system", 00:11:12.101 "dma_device_type": 1 00:11:12.101 }, 00:11:12.101 { 00:11:12.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.101 "dma_device_type": 2 00:11:12.101 } 00:11:12.101 ], 00:11:12.101 "driver_specific": {} 00:11:12.101 } 00:11:12.101 ] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.101 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.101 "name": "Existed_Raid", 00:11:12.101 "uuid": "f3c958d3-abfc-4f98-b67d-3af523fa9818", 00:11:12.101 "strip_size_kb": 0, 00:11:12.101 "state": "configuring", 00:11:12.101 "raid_level": "raid1", 00:11:12.101 "superblock": true, 00:11:12.101 "num_base_bdevs": 2, 00:11:12.101 "num_base_bdevs_discovered": 1, 00:11:12.101 "num_base_bdevs_operational": 2, 00:11:12.101 "base_bdevs_list": [ 00:11:12.101 { 00:11:12.101 "name": "BaseBdev1", 00:11:12.102 "uuid": "88ebd631-ff03-4825-b45d-cb79e586654a", 00:11:12.102 "is_configured": true, 00:11:12.102 "data_offset": 2048, 00:11:12.102 "data_size": 63488 00:11:12.102 }, 00:11:12.102 { 00:11:12.102 "name": "BaseBdev2", 00:11:12.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.102 "is_configured": false, 00:11:12.102 
"data_offset": 0, 00:11:12.102 "data_size": 0 00:11:12.102 } 00:11:12.102 ] 00:11:12.102 }' 00:11:12.102 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.102 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.669 [2024-11-26 20:24:05.967363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.669 [2024-11-26 20:24:05.967422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.669 [2024-11-26 20:24:05.975395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.669 [2024-11-26 20:24:05.977477] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.669 [2024-11-26 20:24:05.977575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.669 20:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.669 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.669 20:24:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.669 "name": "Existed_Raid", 00:11:12.669 "uuid": "9432ad7b-5e68-40d9-a0af-382575b63088", 00:11:12.669 "strip_size_kb": 0, 00:11:12.669 "state": "configuring", 00:11:12.669 "raid_level": "raid1", 00:11:12.669 "superblock": true, 00:11:12.669 "num_base_bdevs": 2, 00:11:12.669 "num_base_bdevs_discovered": 1, 00:11:12.669 "num_base_bdevs_operational": 2, 00:11:12.669 "base_bdevs_list": [ 00:11:12.669 { 00:11:12.669 "name": "BaseBdev1", 00:11:12.669 "uuid": "88ebd631-ff03-4825-b45d-cb79e586654a", 00:11:12.669 "is_configured": true, 00:11:12.669 "data_offset": 2048, 00:11:12.669 "data_size": 63488 00:11:12.669 }, 00:11:12.669 { 00:11:12.669 "name": "BaseBdev2", 00:11:12.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.669 "is_configured": false, 00:11:12.669 "data_offset": 0, 00:11:12.669 "data_size": 0 00:11:12.669 } 00:11:12.669 ] 00:11:12.669 }' 00:11:12.669 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.669 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.928 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.928 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.928 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.186 [2024-11-26 20:24:06.505327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.186 [2024-11-26 20:24:06.505746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:13.186 [2024-11-26 20:24:06.505810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.186 [2024-11-26 20:24:06.506131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:13.186 
BaseBdev2 00:11:13.186 [2024-11-26 20:24:06.506383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:13.186 [2024-11-26 20:24:06.506448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:13.186 [2024-11-26 20:24:06.506663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.186 [ 00:11:13.186 { 00:11:13.186 "name": "BaseBdev2", 00:11:13.186 "aliases": [ 00:11:13.186 "a3c65ecc-3ec4-4f4a-90f2-48a98d8505a3" 00:11:13.186 ], 00:11:13.186 "product_name": "Malloc disk", 00:11:13.186 "block_size": 512, 00:11:13.186 "num_blocks": 65536, 00:11:13.186 "uuid": "a3c65ecc-3ec4-4f4a-90f2-48a98d8505a3", 00:11:13.186 "assigned_rate_limits": { 00:11:13.186 "rw_ios_per_sec": 0, 00:11:13.186 "rw_mbytes_per_sec": 0, 00:11:13.186 "r_mbytes_per_sec": 0, 00:11:13.186 "w_mbytes_per_sec": 0 00:11:13.186 }, 00:11:13.186 "claimed": true, 00:11:13.186 "claim_type": "exclusive_write", 00:11:13.186 "zoned": false, 00:11:13.186 "supported_io_types": { 00:11:13.186 "read": true, 00:11:13.186 "write": true, 00:11:13.186 "unmap": true, 00:11:13.186 "flush": true, 00:11:13.186 "reset": true, 00:11:13.186 "nvme_admin": false, 00:11:13.186 "nvme_io": false, 00:11:13.186 "nvme_io_md": false, 00:11:13.186 "write_zeroes": true, 00:11:13.186 "zcopy": true, 00:11:13.186 "get_zone_info": false, 00:11:13.186 "zone_management": false, 00:11:13.186 "zone_append": false, 00:11:13.186 "compare": false, 00:11:13.186 "compare_and_write": false, 00:11:13.186 "abort": true, 00:11:13.186 "seek_hole": false, 00:11:13.186 "seek_data": false, 00:11:13.186 "copy": true, 00:11:13.186 "nvme_iov_md": false 00:11:13.186 }, 00:11:13.186 "memory_domains": [ 00:11:13.186 { 00:11:13.186 "dma_device_id": "system", 00:11:13.186 "dma_device_type": 1 00:11:13.186 }, 00:11:13.186 { 00:11:13.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.186 "dma_device_type": 2 00:11:13.186 } 00:11:13.186 ], 00:11:13.186 "driver_specific": {} 00:11:13.186 } 00:11:13.186 ] 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:13.186 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:13.187 "name": "Existed_Raid", 00:11:13.187 "uuid": "9432ad7b-5e68-40d9-a0af-382575b63088", 00:11:13.187 "strip_size_kb": 0, 00:11:13.187 "state": "online", 00:11:13.187 "raid_level": "raid1", 00:11:13.187 "superblock": true, 00:11:13.187 "num_base_bdevs": 2, 00:11:13.187 "num_base_bdevs_discovered": 2, 00:11:13.187 "num_base_bdevs_operational": 2, 00:11:13.187 "base_bdevs_list": [ 00:11:13.187 { 00:11:13.187 "name": "BaseBdev1", 00:11:13.187 "uuid": "88ebd631-ff03-4825-b45d-cb79e586654a", 00:11:13.187 "is_configured": true, 00:11:13.187 "data_offset": 2048, 00:11:13.187 "data_size": 63488 00:11:13.187 }, 00:11:13.187 { 00:11:13.187 "name": "BaseBdev2", 00:11:13.187 "uuid": "a3c65ecc-3ec4-4f4a-90f2-48a98d8505a3", 00:11:13.187 "is_configured": true, 00:11:13.187 "data_offset": 2048, 00:11:13.187 "data_size": 63488 00:11:13.187 } 00:11:13.187 ] 00:11:13.187 }' 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.187 20:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.752 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.752 20:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.752 [2024-11-26 20:24:07.012914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.752 "name": "Existed_Raid", 00:11:13.752 "aliases": [ 00:11:13.752 "9432ad7b-5e68-40d9-a0af-382575b63088" 00:11:13.752 ], 00:11:13.752 "product_name": "Raid Volume", 00:11:13.752 "block_size": 512, 00:11:13.752 "num_blocks": 63488, 00:11:13.752 "uuid": "9432ad7b-5e68-40d9-a0af-382575b63088", 00:11:13.752 "assigned_rate_limits": { 00:11:13.752 "rw_ios_per_sec": 0, 00:11:13.752 "rw_mbytes_per_sec": 0, 00:11:13.752 "r_mbytes_per_sec": 0, 00:11:13.752 "w_mbytes_per_sec": 0 00:11:13.752 }, 00:11:13.752 "claimed": false, 00:11:13.752 "zoned": false, 00:11:13.752 "supported_io_types": { 00:11:13.752 "read": true, 00:11:13.752 "write": true, 00:11:13.752 "unmap": false, 00:11:13.752 "flush": false, 00:11:13.752 "reset": true, 00:11:13.752 "nvme_admin": false, 00:11:13.752 "nvme_io": false, 00:11:13.752 "nvme_io_md": false, 00:11:13.752 "write_zeroes": true, 00:11:13.752 "zcopy": false, 00:11:13.752 "get_zone_info": false, 00:11:13.752 "zone_management": false, 00:11:13.752 "zone_append": false, 00:11:13.752 "compare": false, 00:11:13.752 "compare_and_write": false, 00:11:13.752 "abort": false, 00:11:13.752 "seek_hole": false, 00:11:13.752 "seek_data": false, 00:11:13.752 "copy": false, 00:11:13.752 "nvme_iov_md": false 00:11:13.752 }, 00:11:13.752 "memory_domains": [ 00:11:13.752 { 00:11:13.752 "dma_device_id": "system", 00:11:13.752 "dma_device_type": 1 00:11:13.752 }, 
00:11:13.752 { 00:11:13.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.752 "dma_device_type": 2 00:11:13.752 }, 00:11:13.752 { 00:11:13.752 "dma_device_id": "system", 00:11:13.752 "dma_device_type": 1 00:11:13.752 }, 00:11:13.752 { 00:11:13.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.752 "dma_device_type": 2 00:11:13.752 } 00:11:13.752 ], 00:11:13.752 "driver_specific": { 00:11:13.752 "raid": { 00:11:13.752 "uuid": "9432ad7b-5e68-40d9-a0af-382575b63088", 00:11:13.752 "strip_size_kb": 0, 00:11:13.752 "state": "online", 00:11:13.752 "raid_level": "raid1", 00:11:13.752 "superblock": true, 00:11:13.752 "num_base_bdevs": 2, 00:11:13.752 "num_base_bdevs_discovered": 2, 00:11:13.752 "num_base_bdevs_operational": 2, 00:11:13.752 "base_bdevs_list": [ 00:11:13.752 { 00:11:13.752 "name": "BaseBdev1", 00:11:13.752 "uuid": "88ebd631-ff03-4825-b45d-cb79e586654a", 00:11:13.752 "is_configured": true, 00:11:13.752 "data_offset": 2048, 00:11:13.752 "data_size": 63488 00:11:13.752 }, 00:11:13.752 { 00:11:13.752 "name": "BaseBdev2", 00:11:13.752 "uuid": "a3c65ecc-3ec4-4f4a-90f2-48a98d8505a3", 00:11:13.752 "is_configured": true, 00:11:13.752 "data_offset": 2048, 00:11:13.752 "data_size": 63488 00:11:13.752 } 00:11:13.752 ] 00:11:13.752 } 00:11:13.752 } 00:11:13.752 }' 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.752 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.752 BaseBdev2' 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.753 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.753 [2024-11-26 20:24:07.248350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.010 
20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.010 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.010 "name": "Existed_Raid", 00:11:14.010 "uuid": "9432ad7b-5e68-40d9-a0af-382575b63088", 00:11:14.010 "strip_size_kb": 0, 00:11:14.010 "state": "online", 00:11:14.010 "raid_level": "raid1", 00:11:14.011 "superblock": true, 00:11:14.011 "num_base_bdevs": 2, 00:11:14.011 "num_base_bdevs_discovered": 1, 00:11:14.011 "num_base_bdevs_operational": 1, 00:11:14.011 "base_bdevs_list": [ 00:11:14.011 { 00:11:14.011 "name": null, 00:11:14.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.011 "is_configured": false, 00:11:14.011 "data_offset": 0, 00:11:14.011 "data_size": 63488 00:11:14.011 }, 00:11:14.011 { 00:11:14.011 "name": "BaseBdev2", 00:11:14.011 "uuid": "a3c65ecc-3ec4-4f4a-90f2-48a98d8505a3", 00:11:14.011 "is_configured": true, 00:11:14.011 "data_offset": 2048, 00:11:14.011 "data_size": 63488 00:11:14.011 } 00:11:14.011 ] 00:11:14.011 }' 00:11:14.011 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.011 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.268 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:14.268 20:24:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.525 [2024-11-26 20:24:07.877040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.525 [2024-11-26 20:24:07.877214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.525 [2024-11-26 20:24:07.996542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.525 [2024-11-26 20:24:07.996619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.525 [2024-11-26 20:24:07.996635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.525 20:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63215 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63215 ']' 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63215 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.525 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63215 00:11:14.783 killing process with pid 63215 00:11:14.783 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:11:14.783 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.783 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63215' 00:11:14.783 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63215 00:11:14.783 [2024-11-26 20:24:08.088060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.783 20:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63215 00:11:14.783 [2024-11-26 20:24:08.109214] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.205 20:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:16.205 00:11:16.205 real 0m5.700s 00:11:16.205 user 0m8.186s 00:11:16.205 sys 0m0.898s 00:11:16.205 20:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.205 20:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.205 ************************************ 00:11:16.205 END TEST raid_state_function_test_sb 00:11:16.205 ************************************ 00:11:16.205 20:24:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:16.205 20:24:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.205 20:24:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.205 20:24:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.205 ************************************ 00:11:16.205 START TEST raid_superblock_test 00:11:16.205 ************************************ 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63472 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63472 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63472 ']' 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.205 20:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.205 [2024-11-26 20:24:09.651864] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:11:16.205 [2024-11-26 20:24:09.652098] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63472 ] 00:11:16.464 [2024-11-26 20:24:09.817165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.464 [2024-11-26 20:24:09.954837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.723 [2024-11-26 20:24:10.201921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.723 [2024-11-26 20:24:10.202100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.291 20:24:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.291 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 malloc1 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 [2024-11-26 20:24:10.642345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.292 [2024-11-26 20:24:10.642415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.292 [2024-11-26 20:24:10.642442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:17.292 [2024-11-26 20:24:10.642454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.292 
[2024-11-26 20:24:10.644963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.292 [2024-11-26 20:24:10.645068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.292 pt1 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 malloc2 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.292 20:24:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 [2024-11-26 20:24:10.701086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.292 [2024-11-26 20:24:10.701215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.292 [2024-11-26 20:24:10.701292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.292 [2024-11-26 20:24:10.701331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.292 [2024-11-26 20:24:10.703858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.292 [2024-11-26 20:24:10.703944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.292 pt2 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 [2024-11-26 20:24:10.713124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.292 [2024-11-26 20:24:10.715315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.292 [2024-11-26 20:24:10.715595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:17.292 [2024-11-26 20:24:10.715662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.292 [2024-11-26 
20:24:10.716050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:17.292 [2024-11-26 20:24:10.716295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:17.292 [2024-11-26 20:24:10.716351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:17.292 [2024-11-26 20:24:10.716611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.292 20:24:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.292 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.292 "name": "raid_bdev1", 00:11:17.292 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:17.292 "strip_size_kb": 0, 00:11:17.292 "state": "online", 00:11:17.292 "raid_level": "raid1", 00:11:17.292 "superblock": true, 00:11:17.292 "num_base_bdevs": 2, 00:11:17.292 "num_base_bdevs_discovered": 2, 00:11:17.292 "num_base_bdevs_operational": 2, 00:11:17.292 "base_bdevs_list": [ 00:11:17.292 { 00:11:17.292 "name": "pt1", 00:11:17.292 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.292 "is_configured": true, 00:11:17.292 "data_offset": 2048, 00:11:17.292 "data_size": 63488 00:11:17.292 }, 00:11:17.292 { 00:11:17.292 "name": "pt2", 00:11:17.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.293 "is_configured": true, 00:11:17.293 "data_offset": 2048, 00:11:17.293 "data_size": 63488 00:11:17.293 } 00:11:17.293 ] 00:11:17.293 }' 00:11:17.293 20:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.293 20:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.860 
20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.860 [2024-11-26 20:24:11.208811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.860 "name": "raid_bdev1", 00:11:17.860 "aliases": [ 00:11:17.860 "3ba4966f-38d5-4982-a743-6bc8b1285213" 00:11:17.860 ], 00:11:17.860 "product_name": "Raid Volume", 00:11:17.860 "block_size": 512, 00:11:17.860 "num_blocks": 63488, 00:11:17.860 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:17.860 "assigned_rate_limits": { 00:11:17.860 "rw_ios_per_sec": 0, 00:11:17.860 "rw_mbytes_per_sec": 0, 00:11:17.860 "r_mbytes_per_sec": 0, 00:11:17.860 "w_mbytes_per_sec": 0 00:11:17.860 }, 00:11:17.860 "claimed": false, 00:11:17.860 "zoned": false, 00:11:17.860 "supported_io_types": { 00:11:17.860 "read": true, 00:11:17.860 "write": true, 00:11:17.860 "unmap": false, 00:11:17.860 "flush": false, 00:11:17.860 "reset": true, 00:11:17.860 "nvme_admin": false, 00:11:17.860 "nvme_io": false, 00:11:17.860 "nvme_io_md": false, 00:11:17.860 "write_zeroes": true, 00:11:17.860 "zcopy": false, 00:11:17.860 "get_zone_info": false, 00:11:17.860 "zone_management": false, 00:11:17.860 "zone_append": false, 00:11:17.860 "compare": false, 00:11:17.860 "compare_and_write": false, 00:11:17.860 "abort": false, 00:11:17.860 "seek_hole": false, 
00:11:17.860 "seek_data": false, 00:11:17.860 "copy": false, 00:11:17.860 "nvme_iov_md": false 00:11:17.860 }, 00:11:17.860 "memory_domains": [ 00:11:17.860 { 00:11:17.860 "dma_device_id": "system", 00:11:17.860 "dma_device_type": 1 00:11:17.860 }, 00:11:17.860 { 00:11:17.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.860 "dma_device_type": 2 00:11:17.860 }, 00:11:17.860 { 00:11:17.860 "dma_device_id": "system", 00:11:17.860 "dma_device_type": 1 00:11:17.860 }, 00:11:17.860 { 00:11:17.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.860 "dma_device_type": 2 00:11:17.860 } 00:11:17.860 ], 00:11:17.860 "driver_specific": { 00:11:17.860 "raid": { 00:11:17.860 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:17.860 "strip_size_kb": 0, 00:11:17.860 "state": "online", 00:11:17.860 "raid_level": "raid1", 00:11:17.860 "superblock": true, 00:11:17.860 "num_base_bdevs": 2, 00:11:17.860 "num_base_bdevs_discovered": 2, 00:11:17.860 "num_base_bdevs_operational": 2, 00:11:17.860 "base_bdevs_list": [ 00:11:17.860 { 00:11:17.860 "name": "pt1", 00:11:17.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.860 "is_configured": true, 00:11:17.860 "data_offset": 2048, 00:11:17.860 "data_size": 63488 00:11:17.860 }, 00:11:17.860 { 00:11:17.860 "name": "pt2", 00:11:17.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.860 "is_configured": true, 00:11:17.860 "data_offset": 2048, 00:11:17.860 "data_size": 63488 00:11:17.860 } 00:11:17.860 ] 00:11:17.860 } 00:11:17.860 } 00:11:17.860 }' 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:17.860 pt2' 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.860 20:24:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:17.860 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.861 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.120 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.120 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.120 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:11:18.120 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.120 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 [2024-11-26 20:24:11.428632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3ba4966f-38d5-4982-a743-6bc8b1285213 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3ba4966f-38d5-4982-a743-6bc8b1285213 ']' 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 [2024-11-26 20:24:11.472173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.121 [2024-11-26 20:24:11.472294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.121 [2024-11-26 20:24:11.472442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.121 [2024-11-26 20:24:11.472526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.121 [2024-11-26 20:24:11.472548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 [2024-11-26 20:24:11.612020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:18.121 [2024-11-26 20:24:11.614274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:18.121 [2024-11-26 20:24:11.614407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:11:18.121 [2024-11-26 20:24:11.614530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:18.121 [2024-11-26 20:24:11.614592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.121 [2024-11-26 20:24:11.614630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:18.121 request: 00:11:18.121 { 00:11:18.121 "name": "raid_bdev1", 00:11:18.121 "raid_level": "raid1", 00:11:18.121 "base_bdevs": [ 00:11:18.121 "malloc1", 00:11:18.121 "malloc2" 00:11:18.121 ], 00:11:18.121 "superblock": false, 00:11:18.121 "method": "bdev_raid_create", 00:11:18.121 "req_id": 1 00:11:18.121 } 00:11:18.121 Got JSON-RPC error response 00:11:18.121 response: 00:11:18.121 { 00:11:18.121 "code": -17, 00:11:18.121 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:18.121 } 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.121 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.121 [2024-11-26 20:24:11.671870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.121 [2024-11-26 20:24:11.671952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.121 [2024-11-26 20:24:11.671977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:18.121 [2024-11-26 20:24:11.671992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.381 [2024-11-26 20:24:11.674584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.381 [2024-11-26 20:24:11.674630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.381 [2024-11-26 20:24:11.674738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.381 [2024-11-26 20:24:11.674808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.381 pt1 00:11:18.381 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.382 20:24:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.382 "name": "raid_bdev1", 00:11:18.382 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:18.382 "strip_size_kb": 0, 00:11:18.382 "state": "configuring", 00:11:18.382 "raid_level": "raid1", 00:11:18.382 "superblock": true, 00:11:18.382 "num_base_bdevs": 2, 00:11:18.382 "num_base_bdevs_discovered": 1, 00:11:18.382 "num_base_bdevs_operational": 2, 00:11:18.382 "base_bdevs_list": [ 00:11:18.382 { 00:11:18.382 "name": "pt1", 00:11:18.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.382 
"is_configured": true, 00:11:18.382 "data_offset": 2048, 00:11:18.382 "data_size": 63488 00:11:18.382 }, 00:11:18.382 { 00:11:18.382 "name": null, 00:11:18.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.382 "is_configured": false, 00:11:18.382 "data_offset": 2048, 00:11:18.382 "data_size": 63488 00:11:18.382 } 00:11:18.382 ] 00:11:18.382 }' 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.382 20:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.641 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:18.641 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:18.641 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.641 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.641 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.641 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.641 [2024-11-26 20:24:12.087452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.641 [2024-11-26 20:24:12.087607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.641 [2024-11-26 20:24:12.087666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:18.641 [2024-11-26 20:24:12.087709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.641 [2024-11-26 20:24:12.088323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.641 [2024-11-26 20:24:12.088409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.641 [2024-11-26 20:24:12.088561] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.641 [2024-11-26 20:24:12.088642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.641 [2024-11-26 20:24:12.088832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.641 [2024-11-26 20:24:12.088902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.642 [2024-11-26 20:24:12.089261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:18.642 [2024-11-26 20:24:12.089488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.642 [2024-11-26 20:24:12.089533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:18.642 [2024-11-26 20:24:12.089778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.642 pt2 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:18.642 
20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.642 "name": "raid_bdev1", 00:11:18.642 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:18.642 "strip_size_kb": 0, 00:11:18.642 "state": "online", 00:11:18.642 "raid_level": "raid1", 00:11:18.642 "superblock": true, 00:11:18.642 "num_base_bdevs": 2, 00:11:18.642 "num_base_bdevs_discovered": 2, 00:11:18.642 "num_base_bdevs_operational": 2, 00:11:18.642 "base_bdevs_list": [ 00:11:18.642 { 00:11:18.642 "name": "pt1", 00:11:18.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.642 "is_configured": true, 00:11:18.642 "data_offset": 2048, 00:11:18.642 "data_size": 63488 00:11:18.642 }, 00:11:18.642 { 00:11:18.642 "name": "pt2", 00:11:18.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.642 "is_configured": true, 00:11:18.642 "data_offset": 2048, 00:11:18.642 "data_size": 63488 00:11:18.642 } 00:11:18.642 ] 00:11:18.642 }' 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:18.642 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.210 [2024-11-26 20:24:12.571654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.210 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.210 "name": "raid_bdev1", 00:11:19.210 "aliases": [ 00:11:19.210 "3ba4966f-38d5-4982-a743-6bc8b1285213" 00:11:19.210 ], 00:11:19.210 "product_name": "Raid Volume", 00:11:19.211 "block_size": 512, 00:11:19.211 "num_blocks": 63488, 00:11:19.211 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:19.211 "assigned_rate_limits": { 00:11:19.211 "rw_ios_per_sec": 0, 00:11:19.211 "rw_mbytes_per_sec": 0, 00:11:19.211 "r_mbytes_per_sec": 0, 00:11:19.211 "w_mbytes_per_sec": 0 
00:11:19.211 }, 00:11:19.211 "claimed": false, 00:11:19.211 "zoned": false, 00:11:19.211 "supported_io_types": { 00:11:19.211 "read": true, 00:11:19.211 "write": true, 00:11:19.211 "unmap": false, 00:11:19.211 "flush": false, 00:11:19.211 "reset": true, 00:11:19.211 "nvme_admin": false, 00:11:19.211 "nvme_io": false, 00:11:19.211 "nvme_io_md": false, 00:11:19.211 "write_zeroes": true, 00:11:19.211 "zcopy": false, 00:11:19.211 "get_zone_info": false, 00:11:19.211 "zone_management": false, 00:11:19.211 "zone_append": false, 00:11:19.211 "compare": false, 00:11:19.211 "compare_and_write": false, 00:11:19.211 "abort": false, 00:11:19.211 "seek_hole": false, 00:11:19.211 "seek_data": false, 00:11:19.211 "copy": false, 00:11:19.211 "nvme_iov_md": false 00:11:19.211 }, 00:11:19.211 "memory_domains": [ 00:11:19.211 { 00:11:19.211 "dma_device_id": "system", 00:11:19.211 "dma_device_type": 1 00:11:19.211 }, 00:11:19.211 { 00:11:19.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.211 "dma_device_type": 2 00:11:19.211 }, 00:11:19.211 { 00:11:19.211 "dma_device_id": "system", 00:11:19.211 "dma_device_type": 1 00:11:19.211 }, 00:11:19.211 { 00:11:19.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.211 "dma_device_type": 2 00:11:19.211 } 00:11:19.211 ], 00:11:19.211 "driver_specific": { 00:11:19.211 "raid": { 00:11:19.211 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:19.211 "strip_size_kb": 0, 00:11:19.211 "state": "online", 00:11:19.211 "raid_level": "raid1", 00:11:19.211 "superblock": true, 00:11:19.211 "num_base_bdevs": 2, 00:11:19.211 "num_base_bdevs_discovered": 2, 00:11:19.211 "num_base_bdevs_operational": 2, 00:11:19.211 "base_bdevs_list": [ 00:11:19.211 { 00:11:19.211 "name": "pt1", 00:11:19.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.211 "is_configured": true, 00:11:19.211 "data_offset": 2048, 00:11:19.211 "data_size": 63488 00:11:19.211 }, 00:11:19.211 { 00:11:19.211 "name": "pt2", 00:11:19.211 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:19.211 "is_configured": true, 00:11:19.211 "data_offset": 2048, 00:11:19.211 "data_size": 63488 00:11:19.211 } 00:11:19.211 ] 00:11:19.211 } 00:11:19.211 } 00:11:19.211 }' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:19.211 pt2' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.211 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.470 [2024-11-26 20:24:12.819231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3ba4966f-38d5-4982-a743-6bc8b1285213 '!=' 3ba4966f-38d5-4982-a743-6bc8b1285213 ']' 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.470 [2024-11-26 20:24:12.862934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.470 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:19.470 "name": "raid_bdev1", 00:11:19.470 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:19.470 "strip_size_kb": 0, 00:11:19.470 "state": "online", 00:11:19.470 "raid_level": "raid1", 00:11:19.470 "superblock": true, 00:11:19.470 "num_base_bdevs": 2, 00:11:19.470 "num_base_bdevs_discovered": 1, 00:11:19.470 "num_base_bdevs_operational": 1, 00:11:19.470 "base_bdevs_list": [ 00:11:19.470 { 00:11:19.470 "name": null, 00:11:19.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.470 "is_configured": false, 00:11:19.470 "data_offset": 0, 00:11:19.470 "data_size": 63488 00:11:19.470 }, 00:11:19.470 { 00:11:19.470 "name": "pt2", 00:11:19.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.470 "is_configured": true, 00:11:19.470 "data_offset": 2048, 00:11:19.470 "data_size": 63488 00:11:19.470 } 00:11:19.470 ] 00:11:19.471 }' 00:11:19.471 20:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.471 20:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.039 [2024-11-26 20:24:13.378021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.039 [2024-11-26 20:24:13.378120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.039 [2024-11-26 20:24:13.378255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.039 [2024-11-26 20:24:13.378345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.039 [2024-11-26 20:24:13.378403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.039 [2024-11-26 20:24:13.449905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:20.039 [2024-11-26 20:24:13.450052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.039 [2024-11-26 20:24:13.450101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:20.039 [2024-11-26 20:24:13.450136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.039 [2024-11-26 20:24:13.452777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.039 [2024-11-26 20:24:13.452894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:20.039 [2024-11-26 20:24:13.453042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:20.039 [2024-11-26 20:24:13.453140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.039 [2024-11-26 20:24:13.453354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:20.039 [2024-11-26 20:24:13.453411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.039 [2024-11-26 20:24:13.453730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:20.039 [2024-11-26 20:24:13.453960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:20.039 [2024-11-26 20:24:13.454009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:11:20.039 [2024-11-26 20:24:13.454287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.039 pt2 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.039 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:20.040 "name": "raid_bdev1", 00:11:20.040 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:20.040 "strip_size_kb": 0, 00:11:20.040 "state": "online", 00:11:20.040 "raid_level": "raid1", 00:11:20.040 "superblock": true, 00:11:20.040 "num_base_bdevs": 2, 00:11:20.040 "num_base_bdevs_discovered": 1, 00:11:20.040 "num_base_bdevs_operational": 1, 00:11:20.040 "base_bdevs_list": [ 00:11:20.040 { 00:11:20.040 "name": null, 00:11:20.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.040 "is_configured": false, 00:11:20.040 "data_offset": 2048, 00:11:20.040 "data_size": 63488 00:11:20.040 }, 00:11:20.040 { 00:11:20.040 "name": "pt2", 00:11:20.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.040 "is_configured": true, 00:11:20.040 "data_offset": 2048, 00:11:20.040 "data_size": 63488 00:11:20.040 } 00:11:20.040 ] 00:11:20.040 }' 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.040 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.609 [2024-11-26 20:24:13.909458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.609 [2024-11-26 20:24:13.909516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.609 [2024-11-26 20:24:13.909623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.609 [2024-11-26 20:24:13.909703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.609 [2024-11-26 20:24:13.909719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.609 [2024-11-26 20:24:13.973452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.609 [2024-11-26 20:24:13.973538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.609 [2024-11-26 20:24:13.973563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:20.609 [2024-11-26 20:24:13.973575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.609 [2024-11-26 20:24:13.976351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.609 [2024-11-26 20:24:13.976402] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.609 [2024-11-26 20:24:13.976522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:20.609 [2024-11-26 20:24:13.976584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.609 [2024-11-26 20:24:13.976789] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:20.609 [2024-11-26 20:24:13.976810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.609 [2024-11-26 20:24:13.976838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:20.609 [2024-11-26 20:24:13.976922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.609 [2024-11-26 20:24:13.977014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:20.609 [2024-11-26 20:24:13.977034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.609 [2024-11-26 20:24:13.977378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:20.609 [2024-11-26 20:24:13.977563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:20.609 [2024-11-26 20:24:13.977580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:20.609 [2024-11-26 20:24:13.977830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.609 pt1 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.609 20:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.609 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.609 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.609 "name": "raid_bdev1", 00:11:20.609 "uuid": "3ba4966f-38d5-4982-a743-6bc8b1285213", 00:11:20.609 "strip_size_kb": 0, 00:11:20.609 "state": "online", 00:11:20.609 "raid_level": "raid1", 00:11:20.609 "superblock": true, 00:11:20.609 "num_base_bdevs": 2, 00:11:20.609 "num_base_bdevs_discovered": 1, 00:11:20.609 "num_base_bdevs_operational": 
1, 00:11:20.609 "base_bdevs_list": [ 00:11:20.609 { 00:11:20.609 "name": null, 00:11:20.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.609 "is_configured": false, 00:11:20.609 "data_offset": 2048, 00:11:20.609 "data_size": 63488 00:11:20.609 }, 00:11:20.609 { 00:11:20.609 "name": "pt2", 00:11:20.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.609 "is_configured": true, 00:11:20.609 "data_offset": 2048, 00:11:20.609 "data_size": 63488 00:11:20.609 } 00:11:20.609 ] 00:11:20.609 }' 00:11:20.609 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.609 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.178 [2024-11-26 20:24:14.485722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3ba4966f-38d5-4982-a743-6bc8b1285213 '!=' 3ba4966f-38d5-4982-a743-6bc8b1285213 ']' 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63472 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63472 ']' 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63472 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63472 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63472' 00:11:21.178 killing process with pid 63472 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63472 00:11:21.178 [2024-11-26 20:24:14.569806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.178 [2024-11-26 20:24:14.569936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.178 20:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63472 00:11:21.178 [2024-11-26 20:24:14.569993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.178 [2024-11-26 20:24:14.570010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:11:21.437 [2024-11-26 20:24:14.819399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.868 20:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:22.868 00:11:22.868 real 0m6.632s 00:11:22.868 user 0m9.938s 00:11:22.868 sys 0m1.107s 00:11:22.868 ************************************ 00:11:22.868 END TEST raid_superblock_test 00:11:22.868 ************************************ 00:11:22.868 20:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.868 20:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.868 20:24:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:22.868 20:24:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:22.868 20:24:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.868 20:24:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.868 ************************************ 00:11:22.868 START TEST raid_read_error_test 00:11:22.868 ************************************ 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qS6uo4bpAI 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63808 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63808 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.868 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63808 ']' 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.868 20:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.868 [2024-11-26 20:24:16.375267] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:11:22.868 [2024-11-26 20:24:16.375536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63808 ] 00:11:23.131 [2024-11-26 20:24:16.556633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.390 [2024-11-26 20:24:16.692138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.390 [2024-11-26 20:24:16.942069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.390 [2024-11-26 20:24:16.942150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 BaseBdev1_malloc 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 true 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 [2024-11-26 20:24:17.386696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.958 [2024-11-26 20:24:17.386873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.958 [2024-11-26 20:24:17.386910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:23.958 [2024-11-26 20:24:17.386926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.958 [2024-11-26 20:24:17.389559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.958 [2024-11-26 20:24:17.389611] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.958 BaseBdev1 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 BaseBdev2_malloc 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 true 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 [2024-11-26 20:24:17.459379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:23.958 [2024-11-26 20:24:17.459452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.958 [2024-11-26 20:24:17.459474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:23.958 [2024-11-26 20:24:17.459488] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.958 [2024-11-26 20:24:17.461964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.958 [2024-11-26 20:24:17.462014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.958 BaseBdev2 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.958 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 [2024-11-26 20:24:17.471454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.958 [2024-11-26 20:24:17.473802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.959 [2024-11-26 20:24:17.474093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:23.959 [2024-11-26 20:24:17.474155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:23.959 [2024-11-26 20:24:17.474496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:23.959 [2024-11-26 20:24:17.474761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:23.959 [2024-11-26 20:24:17.474813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:23.959 [2024-11-26 20:24:17.475058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.959 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.218 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.218 "name": "raid_bdev1", 00:11:24.218 "uuid": "4bf87399-c5f3-4a21-8007-6aad879af135", 00:11:24.218 "strip_size_kb": 0, 00:11:24.218 "state": "online", 00:11:24.218 "raid_level": "raid1", 00:11:24.218 "superblock": true, 00:11:24.218 "num_base_bdevs": 2, 00:11:24.218 
"num_base_bdevs_discovered": 2, 00:11:24.218 "num_base_bdevs_operational": 2, 00:11:24.218 "base_bdevs_list": [ 00:11:24.218 { 00:11:24.218 "name": "BaseBdev1", 00:11:24.218 "uuid": "1c4e3602-a954-5e75-9f0f-b3d89ad302c9", 00:11:24.218 "is_configured": true, 00:11:24.218 "data_offset": 2048, 00:11:24.218 "data_size": 63488 00:11:24.218 }, 00:11:24.218 { 00:11:24.218 "name": "BaseBdev2", 00:11:24.218 "uuid": "02a139d6-3c19-55a0-b050-4c3144e88228", 00:11:24.218 "is_configured": true, 00:11:24.218 "data_offset": 2048, 00:11:24.218 "data_size": 63488 00:11:24.218 } 00:11:24.218 ] 00:11:24.218 }' 00:11:24.218 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.218 20:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.477 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.477 20:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.734 [2024-11-26 20:24:18.084114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:25.669 20:24:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.669 20:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.669 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.669 20:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.669 "name": "raid_bdev1", 00:11:25.669 "uuid": "4bf87399-c5f3-4a21-8007-6aad879af135", 00:11:25.669 "strip_size_kb": 0, 00:11:25.669 "state": "online", 
00:11:25.669 "raid_level": "raid1", 00:11:25.669 "superblock": true, 00:11:25.669 "num_base_bdevs": 2, 00:11:25.669 "num_base_bdevs_discovered": 2, 00:11:25.669 "num_base_bdevs_operational": 2, 00:11:25.669 "base_bdevs_list": [ 00:11:25.669 { 00:11:25.669 "name": "BaseBdev1", 00:11:25.669 "uuid": "1c4e3602-a954-5e75-9f0f-b3d89ad302c9", 00:11:25.669 "is_configured": true, 00:11:25.669 "data_offset": 2048, 00:11:25.669 "data_size": 63488 00:11:25.669 }, 00:11:25.669 { 00:11:25.669 "name": "BaseBdev2", 00:11:25.669 "uuid": "02a139d6-3c19-55a0-b050-4c3144e88228", 00:11:25.669 "is_configured": true, 00:11:25.669 "data_offset": 2048, 00:11:25.669 "data_size": 63488 00:11:25.669 } 00:11:25.669 ] 00:11:25.669 }' 00:11:25.669 20:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.669 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.927 20:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.927 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.927 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.185 [2024-11-26 20:24:19.486092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.185 [2024-11-26 20:24:19.486231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.185 [2024-11-26 20:24:19.489467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.185 [2024-11-26 20:24:19.489594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.185 [2024-11-26 20:24:19.489734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.185 [2024-11-26 20:24:19.489806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:11:26.185 { 00:11:26.185 "results": [ 00:11:26.185 { 00:11:26.185 "job": "raid_bdev1", 00:11:26.185 "core_mask": "0x1", 00:11:26.185 "workload": "randrw", 00:11:26.185 "percentage": 50, 00:11:26.185 "status": "finished", 00:11:26.185 "queue_depth": 1, 00:11:26.185 "io_size": 131072, 00:11:26.185 "runtime": 1.402545, 00:11:26.185 "iops": 14454.438182019116, 00:11:26.185 "mibps": 1806.8047727523895, 00:11:26.185 "io_failed": 0, 00:11:26.185 "io_timeout": 0, 00:11:26.185 "avg_latency_us": 65.83435115046429, 00:11:26.185 "min_latency_us": 30.183406113537117, 00:11:26.185 "max_latency_us": 1788.646288209607 00:11:26.185 } 00:11:26.185 ], 00:11:26.185 "core_count": 1 00:11:26.185 } 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63808 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63808 ']' 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63808 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63808 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63808' 00:11:26.185 killing process with pid 63808 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63808 00:11:26.185 [2024-11-26 
20:24:19.536232] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.185 20:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63808 00:11:26.185 [2024-11-26 20:24:19.704599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qS6uo4bpAI 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:28.087 ************************************ 00:11:28.087 END TEST raid_read_error_test 00:11:28.087 ************************************ 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:28.087 00:11:28.087 real 0m4.891s 00:11:28.087 user 0m5.909s 00:11:28.087 sys 0m0.630s 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.087 20:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 20:24:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:28.087 20:24:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:28.087 20:24:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.087 20:24:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 ************************************ 00:11:28.087 START TEST 
raid_write_error_test 00:11:28.087 ************************************ 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:28.087 20:24:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PCETjjddT0 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63954 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63954 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63954 ']' 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.087 20:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 [2024-11-26 20:24:21.342844] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:11:28.087 [2024-11-26 20:24:21.343023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63954 ] 00:11:28.087 [2024-11-26 20:24:21.514269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.355 [2024-11-26 20:24:21.648337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.355 [2024-11-26 20:24:21.890639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.355 [2024-11-26 20:24:21.890717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.935 BaseBdev1_malloc 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.935 true 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.935 [2024-11-26 20:24:22.321534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:28.935 [2024-11-26 20:24:22.321686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.935 [2024-11-26 20:24:22.321748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:28.935 [2024-11-26 20:24:22.321812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.935 [2024-11-26 20:24:22.324496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.935 [2024-11-26 20:24:22.324597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:28.935 BaseBdev1 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.935 BaseBdev2_malloc 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:28.935 20:24:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.935 true 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.935 [2024-11-26 20:24:22.390259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:28.935 [2024-11-26 20:24:22.390321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.935 [2024-11-26 20:24:22.390339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:28.935 [2024-11-26 20:24:22.390352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.935 [2024-11-26 20:24:22.392756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.935 [2024-11-26 20:24:22.392868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:28.935 BaseBdev2 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.935 [2024-11-26 20:24:22.402311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:28.935 [2024-11-26 20:24:22.404426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.935 [2024-11-26 20:24:22.404669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:28.935 [2024-11-26 20:24:22.404687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.935 [2024-11-26 20:24:22.404964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:28.935 [2024-11-26 20:24:22.405161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.935 [2024-11-26 20:24:22.405174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:28.935 [2024-11-26 20:24:22.405393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.935 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.936 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.936 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.936 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.936 "name": "raid_bdev1", 00:11:28.936 "uuid": "7639d85e-8930-4960-adbd-1c5f31171c90", 00:11:28.936 "strip_size_kb": 0, 00:11:28.936 "state": "online", 00:11:28.936 "raid_level": "raid1", 00:11:28.936 "superblock": true, 00:11:28.936 "num_base_bdevs": 2, 00:11:28.936 "num_base_bdevs_discovered": 2, 00:11:28.936 "num_base_bdevs_operational": 2, 00:11:28.936 "base_bdevs_list": [ 00:11:28.936 { 00:11:28.936 "name": "BaseBdev1", 00:11:28.936 "uuid": "289cb43b-56ff-53a9-a448-f3bb5cac70bb", 00:11:28.936 "is_configured": true, 00:11:28.936 "data_offset": 2048, 00:11:28.936 "data_size": 63488 00:11:28.936 }, 00:11:28.936 { 00:11:28.936 "name": "BaseBdev2", 00:11:28.936 "uuid": "f31b0715-d8d3-547e-ba63-76dd446238ae", 00:11:28.936 "is_configured": true, 00:11:28.936 "data_offset": 2048, 00:11:28.936 "data_size": 63488 00:11:28.936 } 00:11:28.936 ] 00:11:28.936 }' 00:11:28.936 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.936 20:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.503 20:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:29.503 20:24:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:29.503 [2024-11-26 20:24:22.975100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:30.442 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:30.442 20:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.442 20:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.442 [2024-11-26 20:24:23.868059] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:30.442 [2024-11-26 20:24:23.868217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.442 [2024-11-26 20:24:23.868525] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:30.442 20:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.442 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:30.442 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.443 "name": "raid_bdev1", 00:11:30.443 "uuid": "7639d85e-8930-4960-adbd-1c5f31171c90", 00:11:30.443 "strip_size_kb": 0, 00:11:30.443 "state": "online", 00:11:30.443 "raid_level": "raid1", 00:11:30.443 "superblock": true, 00:11:30.443 "num_base_bdevs": 2, 00:11:30.443 "num_base_bdevs_discovered": 1, 00:11:30.443 "num_base_bdevs_operational": 1, 00:11:30.443 "base_bdevs_list": [ 00:11:30.443 { 00:11:30.443 "name": null, 00:11:30.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.443 "is_configured": false, 00:11:30.443 "data_offset": 0, 00:11:30.443 "data_size": 63488 00:11:30.443 }, 00:11:30.443 { 00:11:30.443 "name": 
"BaseBdev2", 00:11:30.443 "uuid": "f31b0715-d8d3-547e-ba63-76dd446238ae", 00:11:30.443 "is_configured": true, 00:11:30.443 "data_offset": 2048, 00:11:30.443 "data_size": 63488 00:11:30.443 } 00:11:30.443 ] 00:11:30.443 }' 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.443 20:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.040 [2024-11-26 20:24:24.350549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:31.040 [2024-11-26 20:24:24.350586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.040 [2024-11-26 20:24:24.353809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.040 [2024-11-26 20:24:24.353861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.040 [2024-11-26 20:24:24.353923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.040 [2024-11-26 20:24:24.353933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:31.040 { 00:11:31.040 "results": [ 00:11:31.040 { 00:11:31.040 "job": "raid_bdev1", 00:11:31.040 "core_mask": "0x1", 00:11:31.040 "workload": "randrw", 00:11:31.040 "percentage": 50, 00:11:31.040 "status": "finished", 00:11:31.040 "queue_depth": 1, 00:11:31.040 "io_size": 131072, 00:11:31.040 "runtime": 1.375869, 00:11:31.040 "iops": 17635.399881820143, 00:11:31.040 "mibps": 2204.424985227518, 00:11:31.040 "io_failed": 0, 00:11:31.040 "io_timeout": 0, 
00:11:31.040 "avg_latency_us": 53.49217760385397, 00:11:31.040 "min_latency_us": 25.3764192139738, 00:11:31.040 "max_latency_us": 1559.6995633187773 00:11:31.040 } 00:11:31.040 ], 00:11:31.040 "core_count": 1 00:11:31.040 } 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63954 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63954 ']' 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63954 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63954 00:11:31.040 killing process with pid 63954 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63954' 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63954 00:11:31.040 [2024-11-26 20:24:24.396340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.040 20:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63954 00:11:31.040 [2024-11-26 20:24:24.552039] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PCETjjddT0 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:32.946 ************************************ 00:11:32.946 END TEST raid_write_error_test 00:11:32.946 ************************************ 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:32.946 00:11:32.946 real 0m4.781s 00:11:32.946 user 0m5.778s 00:11:32.946 sys 0m0.571s 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.946 20:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.946 20:24:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:32.946 20:24:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:32.946 20:24:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:32.946 20:24:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.946 20:24:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.946 20:24:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.946 ************************************ 00:11:32.946 START TEST raid_state_function_test 00:11:32.946 ************************************ 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.946 Process raid pid: 64097 00:11:32.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64097 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64097' 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64097 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64097 ']' 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.946 20:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.946 [2024-11-26 20:24:26.156874] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:11:32.946 [2024-11-26 20:24:26.157502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.946 [2024-11-26 20:24:26.340759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.946 [2024-11-26 20:24:26.480108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.205 [2024-11-26 20:24:26.728505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.205 [2024-11-26 20:24:26.728671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.772 [2024-11-26 20:24:27.077681] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.772 [2024-11-26 
20:24:27.077813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.772 [2024-11-26 20:24:27.077857] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.772 [2024-11-26 20:24:27.077895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.772 [2024-11-26 20:24:27.077939] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.772 [2024-11-26 20:24:27.077968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.772 "name": "Existed_Raid", 00:11:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.772 "strip_size_kb": 64, 00:11:33.772 "state": "configuring", 00:11:33.772 "raid_level": "raid0", 00:11:33.772 "superblock": false, 00:11:33.772 "num_base_bdevs": 3, 00:11:33.772 "num_base_bdevs_discovered": 0, 00:11:33.772 "num_base_bdevs_operational": 3, 00:11:33.772 "base_bdevs_list": [ 00:11:33.772 { 00:11:33.772 "name": "BaseBdev1", 00:11:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.772 "is_configured": false, 00:11:33.772 "data_offset": 0, 00:11:33.772 "data_size": 0 00:11:33.772 }, 00:11:33.772 { 00:11:33.772 "name": "BaseBdev2", 00:11:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.772 "is_configured": false, 00:11:33.772 "data_offset": 0, 00:11:33.772 "data_size": 0 00:11:33.772 }, 00:11:33.772 { 00:11:33.772 "name": "BaseBdev3", 00:11:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.772 "is_configured": false, 00:11:33.772 "data_offset": 0, 00:11:33.772 "data_size": 0 00:11:33.772 } 00:11:33.772 ] 00:11:33.772 }' 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.772 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.090 [2024-11-26 20:24:27.544996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.090 [2024-11-26 20:24:27.545039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.090 [2024-11-26 20:24:27.556989] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.090 [2024-11-26 20:24:27.557048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.090 [2024-11-26 20:24:27.557059] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.090 [2024-11-26 20:24:27.557070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.090 [2024-11-26 20:24:27.557078] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.090 [2024-11-26 20:24:27.557089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.090 [2024-11-26 20:24:27.611198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.090 BaseBdev1 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.090 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.091 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.091 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.091 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.091 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.091 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.091 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.091 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.350 [ 00:11:34.350 { 
00:11:34.350 "name": "BaseBdev1", 00:11:34.350 "aliases": [ 00:11:34.350 "a355c704-dc4a-430a-8368-8cd9d99df3eb" 00:11:34.350 ], 00:11:34.350 "product_name": "Malloc disk", 00:11:34.350 "block_size": 512, 00:11:34.350 "num_blocks": 65536, 00:11:34.350 "uuid": "a355c704-dc4a-430a-8368-8cd9d99df3eb", 00:11:34.350 "assigned_rate_limits": { 00:11:34.350 "rw_ios_per_sec": 0, 00:11:34.350 "rw_mbytes_per_sec": 0, 00:11:34.350 "r_mbytes_per_sec": 0, 00:11:34.350 "w_mbytes_per_sec": 0 00:11:34.350 }, 00:11:34.350 "claimed": true, 00:11:34.350 "claim_type": "exclusive_write", 00:11:34.350 "zoned": false, 00:11:34.350 "supported_io_types": { 00:11:34.350 "read": true, 00:11:34.350 "write": true, 00:11:34.350 "unmap": true, 00:11:34.350 "flush": true, 00:11:34.350 "reset": true, 00:11:34.350 "nvme_admin": false, 00:11:34.350 "nvme_io": false, 00:11:34.350 "nvme_io_md": false, 00:11:34.350 "write_zeroes": true, 00:11:34.350 "zcopy": true, 00:11:34.350 "get_zone_info": false, 00:11:34.350 "zone_management": false, 00:11:34.350 "zone_append": false, 00:11:34.350 "compare": false, 00:11:34.350 "compare_and_write": false, 00:11:34.350 "abort": true, 00:11:34.350 "seek_hole": false, 00:11:34.350 "seek_data": false, 00:11:34.350 "copy": true, 00:11:34.350 "nvme_iov_md": false 00:11:34.350 }, 00:11:34.350 "memory_domains": [ 00:11:34.350 { 00:11:34.350 "dma_device_id": "system", 00:11:34.351 "dma_device_type": 1 00:11:34.351 }, 00:11:34.351 { 00:11:34.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.351 "dma_device_type": 2 00:11:34.351 } 00:11:34.351 ], 00:11:34.351 "driver_specific": {} 00:11:34.351 } 00:11:34.351 ] 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.351 "name": "Existed_Raid", 00:11:34.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.351 "strip_size_kb": 64, 00:11:34.351 "state": "configuring", 00:11:34.351 "raid_level": "raid0", 00:11:34.351 "superblock": false, 00:11:34.351 "num_base_bdevs": 3, 00:11:34.351 
"num_base_bdevs_discovered": 1, 00:11:34.351 "num_base_bdevs_operational": 3, 00:11:34.351 "base_bdevs_list": [ 00:11:34.351 { 00:11:34.351 "name": "BaseBdev1", 00:11:34.351 "uuid": "a355c704-dc4a-430a-8368-8cd9d99df3eb", 00:11:34.351 "is_configured": true, 00:11:34.351 "data_offset": 0, 00:11:34.351 "data_size": 65536 00:11:34.351 }, 00:11:34.351 { 00:11:34.351 "name": "BaseBdev2", 00:11:34.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.351 "is_configured": false, 00:11:34.351 "data_offset": 0, 00:11:34.351 "data_size": 0 00:11:34.351 }, 00:11:34.351 { 00:11:34.351 "name": "BaseBdev3", 00:11:34.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.351 "is_configured": false, 00:11:34.351 "data_offset": 0, 00:11:34.351 "data_size": 0 00:11:34.351 } 00:11:34.351 ] 00:11:34.351 }' 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.351 20:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.610 [2024-11-26 20:24:28.118422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.610 [2024-11-26 20:24:28.118578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.610 [2024-11-26 20:24:28.130519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.610 [2024-11-26 20:24:28.132697] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.610 [2024-11-26 20:24:28.132755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.610 [2024-11-26 20:24:28.132769] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.610 [2024-11-26 20:24:28.132780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.610 20:24:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.610 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.869 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.869 "name": "Existed_Raid", 00:11:34.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.869 "strip_size_kb": 64, 00:11:34.869 "state": "configuring", 00:11:34.869 "raid_level": "raid0", 00:11:34.869 "superblock": false, 00:11:34.869 "num_base_bdevs": 3, 00:11:34.869 "num_base_bdevs_discovered": 1, 00:11:34.869 "num_base_bdevs_operational": 3, 00:11:34.869 "base_bdevs_list": [ 00:11:34.869 { 00:11:34.869 "name": "BaseBdev1", 00:11:34.869 "uuid": "a355c704-dc4a-430a-8368-8cd9d99df3eb", 00:11:34.869 "is_configured": true, 00:11:34.869 "data_offset": 0, 00:11:34.869 "data_size": 65536 00:11:34.869 }, 00:11:34.869 { 00:11:34.869 "name": "BaseBdev2", 00:11:34.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.869 "is_configured": false, 00:11:34.869 "data_offset": 0, 00:11:34.869 "data_size": 0 00:11:34.869 }, 00:11:34.869 { 00:11:34.869 "name": "BaseBdev3", 00:11:34.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.869 "is_configured": false, 00:11:34.869 "data_offset": 
0, 00:11:34.869 "data_size": 0 00:11:34.869 } 00:11:34.869 ] 00:11:34.869 }' 00:11:34.869 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.869 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.128 [2024-11-26 20:24:28.613722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.128 BaseBdev2 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.128 [ 00:11:35.128 { 00:11:35.128 "name": "BaseBdev2", 00:11:35.128 "aliases": [ 00:11:35.128 "21f33f04-9f97-455b-b7c9-13e826987d15" 00:11:35.128 ], 00:11:35.128 "product_name": "Malloc disk", 00:11:35.128 "block_size": 512, 00:11:35.128 "num_blocks": 65536, 00:11:35.128 "uuid": "21f33f04-9f97-455b-b7c9-13e826987d15", 00:11:35.128 "assigned_rate_limits": { 00:11:35.128 "rw_ios_per_sec": 0, 00:11:35.128 "rw_mbytes_per_sec": 0, 00:11:35.128 "r_mbytes_per_sec": 0, 00:11:35.128 "w_mbytes_per_sec": 0 00:11:35.128 }, 00:11:35.128 "claimed": true, 00:11:35.128 "claim_type": "exclusive_write", 00:11:35.128 "zoned": false, 00:11:35.128 "supported_io_types": { 00:11:35.128 "read": true, 00:11:35.128 "write": true, 00:11:35.128 "unmap": true, 00:11:35.128 "flush": true, 00:11:35.128 "reset": true, 00:11:35.128 "nvme_admin": false, 00:11:35.128 "nvme_io": false, 00:11:35.128 "nvme_io_md": false, 00:11:35.128 "write_zeroes": true, 00:11:35.128 "zcopy": true, 00:11:35.128 "get_zone_info": false, 00:11:35.128 "zone_management": false, 00:11:35.128 "zone_append": false, 00:11:35.128 "compare": false, 00:11:35.128 "compare_and_write": false, 00:11:35.128 "abort": true, 00:11:35.128 "seek_hole": false, 00:11:35.128 "seek_data": false, 00:11:35.128 "copy": true, 00:11:35.128 "nvme_iov_md": false 00:11:35.128 }, 00:11:35.128 "memory_domains": [ 00:11:35.128 { 00:11:35.128 "dma_device_id": "system", 00:11:35.128 "dma_device_type": 1 00:11:35.128 }, 00:11:35.128 { 00:11:35.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.128 "dma_device_type": 2 00:11:35.128 } 00:11:35.128 ], 00:11:35.128 "driver_specific": {} 00:11:35.128 } 
00:11:35.128 ] 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.128 20:24:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.128 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.387 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.387 "name": "Existed_Raid", 00:11:35.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.387 "strip_size_kb": 64, 00:11:35.387 "state": "configuring", 00:11:35.387 "raid_level": "raid0", 00:11:35.388 "superblock": false, 00:11:35.388 "num_base_bdevs": 3, 00:11:35.388 "num_base_bdevs_discovered": 2, 00:11:35.388 "num_base_bdevs_operational": 3, 00:11:35.388 "base_bdevs_list": [ 00:11:35.388 { 00:11:35.388 "name": "BaseBdev1", 00:11:35.388 "uuid": "a355c704-dc4a-430a-8368-8cd9d99df3eb", 00:11:35.388 "is_configured": true, 00:11:35.388 "data_offset": 0, 00:11:35.388 "data_size": 65536 00:11:35.388 }, 00:11:35.388 { 00:11:35.388 "name": "BaseBdev2", 00:11:35.388 "uuid": "21f33f04-9f97-455b-b7c9-13e826987d15", 00:11:35.388 "is_configured": true, 00:11:35.388 "data_offset": 0, 00:11:35.388 "data_size": 65536 00:11:35.388 }, 00:11:35.388 { 00:11:35.388 "name": "BaseBdev3", 00:11:35.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.388 "is_configured": false, 00:11:35.388 "data_offset": 0, 00:11:35.388 "data_size": 0 00:11:35.388 } 00:11:35.388 ] 00:11:35.388 }' 00:11:35.388 20:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.388 20:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.648 [2024-11-26 20:24:29.154970] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.648 [2024-11-26 20:24:29.155138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.648 [2024-11-26 20:24:29.155183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:35.648 [2024-11-26 20:24:29.155600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:35.648 [2024-11-26 20:24:29.155865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.648 [2024-11-26 20:24:29.155919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:35.648 [2024-11-26 20:24:29.156303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.648 BaseBdev3 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.648 [ 00:11:35.648 { 00:11:35.648 "name": "BaseBdev3", 00:11:35.648 "aliases": [ 00:11:35.648 "b08853e8-ca12-4b5f-bdc8-ddddab52f6c8" 00:11:35.648 ], 00:11:35.648 "product_name": "Malloc disk", 00:11:35.648 "block_size": 512, 00:11:35.648 "num_blocks": 65536, 00:11:35.648 "uuid": "b08853e8-ca12-4b5f-bdc8-ddddab52f6c8", 00:11:35.648 "assigned_rate_limits": { 00:11:35.648 "rw_ios_per_sec": 0, 00:11:35.648 "rw_mbytes_per_sec": 0, 00:11:35.648 "r_mbytes_per_sec": 0, 00:11:35.648 "w_mbytes_per_sec": 0 00:11:35.648 }, 00:11:35.648 "claimed": true, 00:11:35.648 "claim_type": "exclusive_write", 00:11:35.648 "zoned": false, 00:11:35.648 "supported_io_types": { 00:11:35.648 "read": true, 00:11:35.648 "write": true, 00:11:35.648 "unmap": true, 00:11:35.648 "flush": true, 00:11:35.648 "reset": true, 00:11:35.648 "nvme_admin": false, 00:11:35.648 "nvme_io": false, 00:11:35.648 "nvme_io_md": false, 00:11:35.648 "write_zeroes": true, 00:11:35.648 "zcopy": true, 00:11:35.648 "get_zone_info": false, 00:11:35.648 "zone_management": false, 00:11:35.648 "zone_append": false, 00:11:35.648 "compare": false, 00:11:35.648 "compare_and_write": false, 00:11:35.648 "abort": true, 00:11:35.648 "seek_hole": false, 00:11:35.648 "seek_data": false, 00:11:35.648 "copy": true, 00:11:35.648 "nvme_iov_md": false 00:11:35.648 }, 00:11:35.648 "memory_domains": [ 00:11:35.648 { 00:11:35.648 "dma_device_id": "system", 00:11:35.648 "dma_device_type": 1 00:11:35.648 }, 00:11:35.648 { 00:11:35.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:35.648 "dma_device_type": 2 00:11:35.648 } 00:11:35.648 ], 00:11:35.648 "driver_specific": {} 00:11:35.648 } 00:11:35.648 ] 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.648 20:24:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.648 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.906 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.906 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.906 "name": "Existed_Raid", 00:11:35.906 "uuid": "9a78f6d8-e9d1-43f9-be09-2b4f52650bb4", 00:11:35.906 "strip_size_kb": 64, 00:11:35.906 "state": "online", 00:11:35.906 "raid_level": "raid0", 00:11:35.906 "superblock": false, 00:11:35.906 "num_base_bdevs": 3, 00:11:35.906 "num_base_bdevs_discovered": 3, 00:11:35.907 "num_base_bdevs_operational": 3, 00:11:35.907 "base_bdevs_list": [ 00:11:35.907 { 00:11:35.907 "name": "BaseBdev1", 00:11:35.907 "uuid": "a355c704-dc4a-430a-8368-8cd9d99df3eb", 00:11:35.907 "is_configured": true, 00:11:35.907 "data_offset": 0, 00:11:35.907 "data_size": 65536 00:11:35.907 }, 00:11:35.907 { 00:11:35.907 "name": "BaseBdev2", 00:11:35.907 "uuid": "21f33f04-9f97-455b-b7c9-13e826987d15", 00:11:35.907 "is_configured": true, 00:11:35.907 "data_offset": 0, 00:11:35.907 "data_size": 65536 00:11:35.907 }, 00:11:35.907 { 00:11:35.907 "name": "BaseBdev3", 00:11:35.907 "uuid": "b08853e8-ca12-4b5f-bdc8-ddddab52f6c8", 00:11:35.907 "is_configured": true, 00:11:35.907 "data_offset": 0, 00:11:35.907 "data_size": 65536 00:11:35.907 } 00:11:35.907 ] 00:11:35.907 }' 00:11:35.907 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.907 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.165 20:24:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.165 [2024-11-26 20:24:29.582707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.165 "name": "Existed_Raid", 00:11:36.165 "aliases": [ 00:11:36.165 "9a78f6d8-e9d1-43f9-be09-2b4f52650bb4" 00:11:36.165 ], 00:11:36.165 "product_name": "Raid Volume", 00:11:36.165 "block_size": 512, 00:11:36.165 "num_blocks": 196608, 00:11:36.165 "uuid": "9a78f6d8-e9d1-43f9-be09-2b4f52650bb4", 00:11:36.165 "assigned_rate_limits": { 00:11:36.165 "rw_ios_per_sec": 0, 00:11:36.165 "rw_mbytes_per_sec": 0, 00:11:36.165 "r_mbytes_per_sec": 0, 00:11:36.165 "w_mbytes_per_sec": 0 00:11:36.165 }, 00:11:36.165 "claimed": false, 00:11:36.165 "zoned": false, 00:11:36.165 "supported_io_types": { 00:11:36.165 "read": true, 00:11:36.165 "write": true, 00:11:36.165 "unmap": true, 00:11:36.165 "flush": true, 00:11:36.165 "reset": true, 00:11:36.165 "nvme_admin": false, 00:11:36.165 "nvme_io": false, 00:11:36.165 
"nvme_io_md": false, 00:11:36.165 "write_zeroes": true, 00:11:36.165 "zcopy": false, 00:11:36.165 "get_zone_info": false, 00:11:36.165 "zone_management": false, 00:11:36.165 "zone_append": false, 00:11:36.165 "compare": false, 00:11:36.165 "compare_and_write": false, 00:11:36.165 "abort": false, 00:11:36.165 "seek_hole": false, 00:11:36.165 "seek_data": false, 00:11:36.165 "copy": false, 00:11:36.165 "nvme_iov_md": false 00:11:36.165 }, 00:11:36.165 "memory_domains": [ 00:11:36.165 { 00:11:36.165 "dma_device_id": "system", 00:11:36.165 "dma_device_type": 1 00:11:36.165 }, 00:11:36.165 { 00:11:36.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.165 "dma_device_type": 2 00:11:36.165 }, 00:11:36.165 { 00:11:36.165 "dma_device_id": "system", 00:11:36.165 "dma_device_type": 1 00:11:36.165 }, 00:11:36.165 { 00:11:36.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.165 "dma_device_type": 2 00:11:36.165 }, 00:11:36.165 { 00:11:36.165 "dma_device_id": "system", 00:11:36.165 "dma_device_type": 1 00:11:36.165 }, 00:11:36.165 { 00:11:36.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.165 "dma_device_type": 2 00:11:36.165 } 00:11:36.165 ], 00:11:36.165 "driver_specific": { 00:11:36.165 "raid": { 00:11:36.165 "uuid": "9a78f6d8-e9d1-43f9-be09-2b4f52650bb4", 00:11:36.165 "strip_size_kb": 64, 00:11:36.165 "state": "online", 00:11:36.165 "raid_level": "raid0", 00:11:36.165 "superblock": false, 00:11:36.165 "num_base_bdevs": 3, 00:11:36.165 "num_base_bdevs_discovered": 3, 00:11:36.165 "num_base_bdevs_operational": 3, 00:11:36.165 "base_bdevs_list": [ 00:11:36.165 { 00:11:36.165 "name": "BaseBdev1", 00:11:36.165 "uuid": "a355c704-dc4a-430a-8368-8cd9d99df3eb", 00:11:36.165 "is_configured": true, 00:11:36.165 "data_offset": 0, 00:11:36.165 "data_size": 65536 00:11:36.165 }, 00:11:36.165 { 00:11:36.165 "name": "BaseBdev2", 00:11:36.165 "uuid": "21f33f04-9f97-455b-b7c9-13e826987d15", 00:11:36.165 "is_configured": true, 00:11:36.165 "data_offset": 0, 00:11:36.165 
"data_size": 65536 00:11:36.165 }, 00:11:36.165 { 00:11:36.165 "name": "BaseBdev3", 00:11:36.165 "uuid": "b08853e8-ca12-4b5f-bdc8-ddddab52f6c8", 00:11:36.165 "is_configured": true, 00:11:36.165 "data_offset": 0, 00:11:36.165 "data_size": 65536 00:11:36.165 } 00:11:36.165 ] 00:11:36.165 } 00:11:36.165 } 00:11:36.165 }' 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.165 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.165 BaseBdev2 00:11:36.165 BaseBdev3' 00:11:36.166 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.424 20:24:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.424 20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.424 
20:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.424 [2024-11-26 20:24:29.885887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.424 [2024-11-26 20:24:29.885922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.424 [2024-11-26 20:24:29.885984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.683 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.683 "name": "Existed_Raid", 00:11:36.683 "uuid": "9a78f6d8-e9d1-43f9-be09-2b4f52650bb4", 00:11:36.683 "strip_size_kb": 64, 00:11:36.683 "state": "offline", 00:11:36.683 "raid_level": "raid0", 00:11:36.683 "superblock": false, 00:11:36.683 "num_base_bdevs": 3, 00:11:36.683 "num_base_bdevs_discovered": 2, 00:11:36.683 "num_base_bdevs_operational": 2, 00:11:36.683 "base_bdevs_list": [ 00:11:36.683 { 00:11:36.683 "name": null, 00:11:36.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.683 "is_configured": false, 00:11:36.683 "data_offset": 0, 00:11:36.683 "data_size": 65536 00:11:36.683 }, 00:11:36.683 { 00:11:36.683 "name": "BaseBdev2", 00:11:36.683 "uuid": "21f33f04-9f97-455b-b7c9-13e826987d15", 00:11:36.683 "is_configured": true, 00:11:36.683 "data_offset": 0, 00:11:36.683 "data_size": 65536 00:11:36.683 }, 00:11:36.683 { 00:11:36.683 "name": "BaseBdev3", 00:11:36.683 "uuid": "b08853e8-ca12-4b5f-bdc8-ddddab52f6c8", 00:11:36.683 "is_configured": true, 00:11:36.683 "data_offset": 0, 00:11:36.683 "data_size": 65536 00:11:36.684 } 00:11:36.684 ] 00:11:36.684 }' 
00:11:36.684 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.684 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.942 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.942 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.199 [2024-11-26 20:24:30.550768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.199 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.199 20:24:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.200 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.200 [2024-11-26 20:24:30.728672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.200 [2024-11-26 20:24:30.728786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.457 BaseBdev2 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.457 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.458 [ 00:11:37.458 { 00:11:37.458 "name": "BaseBdev2", 00:11:37.458 "aliases": [ 00:11:37.458 "35d67754-4898-4c1f-93e2-8fb20da98650" 00:11:37.458 ], 00:11:37.458 "product_name": "Malloc disk", 00:11:37.458 "block_size": 512, 00:11:37.458 "num_blocks": 65536, 00:11:37.458 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:37.458 "assigned_rate_limits": { 00:11:37.458 "rw_ios_per_sec": 0, 00:11:37.458 "rw_mbytes_per_sec": 0, 00:11:37.458 "r_mbytes_per_sec": 0, 00:11:37.458 "w_mbytes_per_sec": 0 00:11:37.458 }, 00:11:37.458 "claimed": false, 00:11:37.458 "zoned": false, 00:11:37.458 "supported_io_types": { 00:11:37.458 "read": true, 00:11:37.458 "write": true, 00:11:37.458 "unmap": true, 00:11:37.458 "flush": true, 00:11:37.458 "reset": true, 00:11:37.458 "nvme_admin": false, 00:11:37.458 "nvme_io": false, 00:11:37.458 "nvme_io_md": false, 00:11:37.458 "write_zeroes": true, 00:11:37.458 "zcopy": true, 00:11:37.458 "get_zone_info": false, 00:11:37.458 "zone_management": false, 00:11:37.458 "zone_append": false, 00:11:37.458 "compare": false, 00:11:37.458 "compare_and_write": false, 00:11:37.458 "abort": true, 00:11:37.458 "seek_hole": false, 00:11:37.458 "seek_data": false, 00:11:37.458 "copy": true, 00:11:37.458 "nvme_iov_md": false 
00:11:37.458 }, 00:11:37.458 "memory_domains": [ 00:11:37.458 { 00:11:37.458 "dma_device_id": "system", 00:11:37.458 "dma_device_type": 1 00:11:37.458 }, 00:11:37.458 { 00:11:37.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.458 "dma_device_type": 2 00:11:37.458 } 00:11:37.458 ], 00:11:37.458 "driver_specific": {} 00:11:37.458 } 00:11:37.458 ] 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.458 20:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.715 BaseBdev3 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.715 [ 00:11:37.715 { 00:11:37.715 "name": "BaseBdev3", 00:11:37.715 "aliases": [ 00:11:37.715 "7ac39959-6e80-4ff7-8822-04cd9ec1482a" 00:11:37.715 ], 00:11:37.715 "product_name": "Malloc disk", 00:11:37.715 "block_size": 512, 00:11:37.715 "num_blocks": 65536, 00:11:37.715 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:37.715 "assigned_rate_limits": { 00:11:37.715 "rw_ios_per_sec": 0, 00:11:37.715 "rw_mbytes_per_sec": 0, 00:11:37.715 "r_mbytes_per_sec": 0, 00:11:37.715 "w_mbytes_per_sec": 0 00:11:37.715 }, 00:11:37.715 "claimed": false, 00:11:37.715 "zoned": false, 00:11:37.715 "supported_io_types": { 00:11:37.715 "read": true, 00:11:37.715 "write": true, 00:11:37.715 "unmap": true, 00:11:37.715 "flush": true, 00:11:37.715 "reset": true, 00:11:37.715 "nvme_admin": false, 00:11:37.715 "nvme_io": false, 00:11:37.715 "nvme_io_md": false, 00:11:37.715 "write_zeroes": true, 00:11:37.715 "zcopy": true, 00:11:37.715 "get_zone_info": false, 00:11:37.715 "zone_management": false, 00:11:37.715 "zone_append": false, 00:11:37.715 "compare": false, 00:11:37.715 "compare_and_write": false, 00:11:37.715 "abort": true, 00:11:37.715 "seek_hole": false, 00:11:37.715 "seek_data": false, 00:11:37.715 "copy": true, 00:11:37.715 "nvme_iov_md": false 
00:11:37.715 }, 00:11:37.715 "memory_domains": [ 00:11:37.715 { 00:11:37.715 "dma_device_id": "system", 00:11:37.715 "dma_device_type": 1 00:11:37.715 }, 00:11:37.715 { 00:11:37.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.715 "dma_device_type": 2 00:11:37.715 } 00:11:37.715 ], 00:11:37.715 "driver_specific": {} 00:11:37.715 } 00:11:37.715 ] 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.715 [2024-11-26 20:24:31.086919] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.715 [2024-11-26 20:24:31.087063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.715 [2024-11-26 20:24:31.087107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.715 [2024-11-26 20:24:31.089537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.715 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.716 "name": "Existed_Raid", 00:11:37.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.716 "strip_size_kb": 64, 00:11:37.716 "state": "configuring", 00:11:37.716 "raid_level": "raid0", 00:11:37.716 "superblock": false, 00:11:37.716 "num_base_bdevs": 3, 00:11:37.716 "num_base_bdevs_discovered": 2, 00:11:37.716 "num_base_bdevs_operational": 3, 
00:11:37.716 "base_bdevs_list": [ 00:11:37.716 { 00:11:37.716 "name": "BaseBdev1", 00:11:37.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.716 "is_configured": false, 00:11:37.716 "data_offset": 0, 00:11:37.716 "data_size": 0 00:11:37.716 }, 00:11:37.716 { 00:11:37.716 "name": "BaseBdev2", 00:11:37.716 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:37.716 "is_configured": true, 00:11:37.716 "data_offset": 0, 00:11:37.716 "data_size": 65536 00:11:37.716 }, 00:11:37.716 { 00:11:37.716 "name": "BaseBdev3", 00:11:37.716 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:37.716 "is_configured": true, 00:11:37.716 "data_offset": 0, 00:11:37.716 "data_size": 65536 00:11:37.716 } 00:11:37.716 ] 00:11:37.716 }' 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.716 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.280 [2024-11-26 20:24:31.610191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.280 "name": "Existed_Raid", 00:11:38.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.280 "strip_size_kb": 64, 00:11:38.280 "state": "configuring", 00:11:38.280 "raid_level": "raid0", 00:11:38.280 "superblock": false, 00:11:38.280 "num_base_bdevs": 3, 00:11:38.280 "num_base_bdevs_discovered": 1, 00:11:38.280 "num_base_bdevs_operational": 3, 00:11:38.280 "base_bdevs_list": [ 00:11:38.280 { 00:11:38.280 "name": "BaseBdev1", 00:11:38.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.280 "is_configured": false, 00:11:38.280 "data_offset": 0, 00:11:38.280 "data_size": 0 00:11:38.280 }, 00:11:38.280 { 00:11:38.280 "name": null, 
00:11:38.280 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:38.280 "is_configured": false, 00:11:38.280 "data_offset": 0, 00:11:38.280 "data_size": 65536 00:11:38.280 }, 00:11:38.280 { 00:11:38.280 "name": "BaseBdev3", 00:11:38.280 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:38.280 "is_configured": true, 00:11:38.280 "data_offset": 0, 00:11:38.280 "data_size": 65536 00:11:38.280 } 00:11:38.280 ] 00:11:38.280 }' 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.280 20:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.537 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.537 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.537 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.537 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.537 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.795 [2024-11-26 20:24:32.165057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.795 BaseBdev1 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 
-- # waitforbdev BaseBdev1 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.795 [ 00:11:38.795 { 00:11:38.795 "name": "BaseBdev1", 00:11:38.795 "aliases": [ 00:11:38.795 "e965009b-5257-46f7-8230-64d8822176b8" 00:11:38.795 ], 00:11:38.795 "product_name": "Malloc disk", 00:11:38.795 "block_size": 512, 00:11:38.795 "num_blocks": 65536, 00:11:38.795 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:38.795 "assigned_rate_limits": { 00:11:38.795 "rw_ios_per_sec": 0, 00:11:38.795 "rw_mbytes_per_sec": 0, 00:11:38.795 "r_mbytes_per_sec": 0, 00:11:38.795 "w_mbytes_per_sec": 0 00:11:38.795 }, 00:11:38.795 "claimed": true, 00:11:38.795 "claim_type": "exclusive_write", 00:11:38.795 
"zoned": false, 00:11:38.795 "supported_io_types": { 00:11:38.795 "read": true, 00:11:38.795 "write": true, 00:11:38.795 "unmap": true, 00:11:38.795 "flush": true, 00:11:38.795 "reset": true, 00:11:38.795 "nvme_admin": false, 00:11:38.795 "nvme_io": false, 00:11:38.795 "nvme_io_md": false, 00:11:38.795 "write_zeroes": true, 00:11:38.795 "zcopy": true, 00:11:38.795 "get_zone_info": false, 00:11:38.795 "zone_management": false, 00:11:38.795 "zone_append": false, 00:11:38.795 "compare": false, 00:11:38.795 "compare_and_write": false, 00:11:38.795 "abort": true, 00:11:38.795 "seek_hole": false, 00:11:38.795 "seek_data": false, 00:11:38.795 "copy": true, 00:11:38.795 "nvme_iov_md": false 00:11:38.795 }, 00:11:38.795 "memory_domains": [ 00:11:38.795 { 00:11:38.795 "dma_device_id": "system", 00:11:38.795 "dma_device_type": 1 00:11:38.795 }, 00:11:38.795 { 00:11:38.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.795 "dma_device_type": 2 00:11:38.795 } 00:11:38.795 ], 00:11:38.795 "driver_specific": {} 00:11:38.795 } 00:11:38.795 ] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.795 
20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.795 "name": "Existed_Raid", 00:11:38.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.795 "strip_size_kb": 64, 00:11:38.795 "state": "configuring", 00:11:38.795 "raid_level": "raid0", 00:11:38.795 "superblock": false, 00:11:38.795 "num_base_bdevs": 3, 00:11:38.795 "num_base_bdevs_discovered": 2, 00:11:38.795 "num_base_bdevs_operational": 3, 00:11:38.795 "base_bdevs_list": [ 00:11:38.795 { 00:11:38.795 "name": "BaseBdev1", 00:11:38.795 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:38.795 "is_configured": true, 00:11:38.795 "data_offset": 0, 00:11:38.795 "data_size": 65536 00:11:38.795 }, 00:11:38.795 { 00:11:38.795 "name": null, 00:11:38.795 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:38.795 "is_configured": false, 00:11:38.795 "data_offset": 0, 00:11:38.795 "data_size": 65536 00:11:38.795 }, 00:11:38.795 { 00:11:38.795 "name": "BaseBdev3", 00:11:38.795 
"uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:38.795 "is_configured": true, 00:11:38.795 "data_offset": 0, 00:11:38.795 "data_size": 65536 00:11:38.795 } 00:11:38.795 ] 00:11:38.795 }' 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.795 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.359 [2024-11-26 20:24:32.712453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.359 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.360 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.360 "name": "Existed_Raid", 00:11:39.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.360 "strip_size_kb": 64, 00:11:39.360 "state": "configuring", 00:11:39.360 "raid_level": "raid0", 00:11:39.360 "superblock": false, 00:11:39.360 "num_base_bdevs": 3, 00:11:39.360 "num_base_bdevs_discovered": 1, 00:11:39.360 "num_base_bdevs_operational": 3, 00:11:39.360 "base_bdevs_list": [ 00:11:39.360 { 00:11:39.360 "name": "BaseBdev1", 00:11:39.360 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:39.360 
"is_configured": true, 00:11:39.360 "data_offset": 0, 00:11:39.360 "data_size": 65536 00:11:39.360 }, 00:11:39.360 { 00:11:39.360 "name": null, 00:11:39.360 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:39.360 "is_configured": false, 00:11:39.360 "data_offset": 0, 00:11:39.360 "data_size": 65536 00:11:39.360 }, 00:11:39.360 { 00:11:39.360 "name": null, 00:11:39.360 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:39.360 "is_configured": false, 00:11:39.360 "data_offset": 0, 00:11:39.360 "data_size": 65536 00:11:39.360 } 00:11:39.360 ] 00:11:39.360 }' 00:11:39.360 20:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.360 20:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.617 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.617 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.617 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.617 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.875 [2024-11-26 20:24:33.207648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.875 "name": "Existed_Raid", 00:11:39.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.875 
"strip_size_kb": 64, 00:11:39.875 "state": "configuring", 00:11:39.875 "raid_level": "raid0", 00:11:39.875 "superblock": false, 00:11:39.875 "num_base_bdevs": 3, 00:11:39.875 "num_base_bdevs_discovered": 2, 00:11:39.875 "num_base_bdevs_operational": 3, 00:11:39.875 "base_bdevs_list": [ 00:11:39.875 { 00:11:39.875 "name": "BaseBdev1", 00:11:39.875 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:39.875 "is_configured": true, 00:11:39.875 "data_offset": 0, 00:11:39.875 "data_size": 65536 00:11:39.875 }, 00:11:39.875 { 00:11:39.875 "name": null, 00:11:39.875 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:39.875 "is_configured": false, 00:11:39.875 "data_offset": 0, 00:11:39.875 "data_size": 65536 00:11:39.875 }, 00:11:39.875 { 00:11:39.875 "name": "BaseBdev3", 00:11:39.875 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:39.875 "is_configured": true, 00:11:39.875 "data_offset": 0, 00:11:39.875 "data_size": 65536 00:11:39.875 } 00:11:39.875 ] 00:11:39.875 }' 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.875 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.443 [2024-11-26 20:24:33.782879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:40.443 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.444 20:24:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.444 "name": "Existed_Raid", 00:11:40.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.444 "strip_size_kb": 64, 00:11:40.444 "state": "configuring", 00:11:40.444 "raid_level": "raid0", 00:11:40.444 "superblock": false, 00:11:40.444 "num_base_bdevs": 3, 00:11:40.444 "num_base_bdevs_discovered": 1, 00:11:40.444 "num_base_bdevs_operational": 3, 00:11:40.444 "base_bdevs_list": [ 00:11:40.444 { 00:11:40.444 "name": null, 00:11:40.444 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:40.444 "is_configured": false, 00:11:40.444 "data_offset": 0, 00:11:40.444 "data_size": 65536 00:11:40.444 }, 00:11:40.444 { 00:11:40.444 "name": null, 00:11:40.444 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:40.444 "is_configured": false, 00:11:40.444 "data_offset": 0, 00:11:40.444 "data_size": 65536 00:11:40.444 }, 00:11:40.444 { 00:11:40.444 "name": "BaseBdev3", 00:11:40.444 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:40.444 "is_configured": true, 00:11:40.444 "data_offset": 0, 00:11:40.444 "data_size": 65536 00:11:40.444 } 00:11:40.444 ] 00:11:40.444 }' 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.444 20:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.014 20:24:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.014 [2024-11-26 20:24:34.439326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.014 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.014 20:24:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.015 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.015 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.015 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.015 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.015 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.015 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.015 "name": "Existed_Raid", 00:11:41.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.015 "strip_size_kb": 64, 00:11:41.015 "state": "configuring", 00:11:41.015 "raid_level": "raid0", 00:11:41.015 "superblock": false, 00:11:41.015 "num_base_bdevs": 3, 00:11:41.015 "num_base_bdevs_discovered": 2, 00:11:41.015 "num_base_bdevs_operational": 3, 00:11:41.015 "base_bdevs_list": [ 00:11:41.015 { 00:11:41.015 "name": null, 00:11:41.015 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:41.015 "is_configured": false, 00:11:41.015 "data_offset": 0, 00:11:41.015 "data_size": 65536 00:11:41.015 }, 00:11:41.015 { 00:11:41.015 "name": "BaseBdev2", 00:11:41.015 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:41.015 "is_configured": true, 00:11:41.015 "data_offset": 0, 00:11:41.015 "data_size": 65536 00:11:41.015 }, 00:11:41.015 { 00:11:41.015 "name": "BaseBdev3", 00:11:41.015 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:41.015 "is_configured": true, 00:11:41.015 "data_offset": 0, 00:11:41.015 "data_size": 65536 00:11:41.015 } 00:11:41.015 ] 00:11:41.015 }' 00:11:41.015 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.015 20:24:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.583 20:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e965009b-5257-46f7-8230-64d8822176b8 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.583 [2024-11-26 20:24:35.086560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.583 [2024-11-26 20:24:35.086619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.583 [2024-11-26 
20:24:35.086631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:41.583 [2024-11-26 20:24:35.086923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:41.583 [2024-11-26 20:24:35.087099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.583 [2024-11-26 20:24:35.087111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.583 [2024-11-26 20:24:35.087434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.583 NewBaseBdev 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.583 [ 00:11:41.583 { 00:11:41.583 "name": "NewBaseBdev", 00:11:41.583 "aliases": [ 00:11:41.583 "e965009b-5257-46f7-8230-64d8822176b8" 00:11:41.583 ], 00:11:41.583 "product_name": "Malloc disk", 00:11:41.583 "block_size": 512, 00:11:41.583 "num_blocks": 65536, 00:11:41.583 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:41.583 "assigned_rate_limits": { 00:11:41.583 "rw_ios_per_sec": 0, 00:11:41.583 "rw_mbytes_per_sec": 0, 00:11:41.583 "r_mbytes_per_sec": 0, 00:11:41.583 "w_mbytes_per_sec": 0 00:11:41.583 }, 00:11:41.583 "claimed": true, 00:11:41.583 "claim_type": "exclusive_write", 00:11:41.583 "zoned": false, 00:11:41.583 "supported_io_types": { 00:11:41.583 "read": true, 00:11:41.583 "write": true, 00:11:41.583 "unmap": true, 00:11:41.583 "flush": true, 00:11:41.583 "reset": true, 00:11:41.583 "nvme_admin": false, 00:11:41.583 "nvme_io": false, 00:11:41.583 "nvme_io_md": false, 00:11:41.583 "write_zeroes": true, 00:11:41.583 "zcopy": true, 00:11:41.583 "get_zone_info": false, 00:11:41.583 "zone_management": false, 00:11:41.583 "zone_append": false, 00:11:41.583 "compare": false, 00:11:41.583 "compare_and_write": false, 00:11:41.583 "abort": true, 00:11:41.583 "seek_hole": false, 00:11:41.583 "seek_data": false, 00:11:41.583 "copy": true, 00:11:41.583 "nvme_iov_md": false 00:11:41.583 }, 00:11:41.583 "memory_domains": [ 00:11:41.583 { 00:11:41.583 "dma_device_id": "system", 00:11:41.583 "dma_device_type": 1 00:11:41.583 }, 00:11:41.583 { 00:11:41.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.583 "dma_device_type": 2 00:11:41.583 } 00:11:41.583 ], 00:11:41.583 "driver_specific": {} 00:11:41.583 } 00:11:41.583 ] 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.583 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.584 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.584 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.584 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.843 "name": "Existed_Raid", 00:11:41.843 "uuid": 
"1430444e-411a-4b3e-b176-eafe449f2867", 00:11:41.843 "strip_size_kb": 64, 00:11:41.843 "state": "online", 00:11:41.843 "raid_level": "raid0", 00:11:41.843 "superblock": false, 00:11:41.843 "num_base_bdevs": 3, 00:11:41.843 "num_base_bdevs_discovered": 3, 00:11:41.843 "num_base_bdevs_operational": 3, 00:11:41.843 "base_bdevs_list": [ 00:11:41.843 { 00:11:41.843 "name": "NewBaseBdev", 00:11:41.843 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:41.843 "is_configured": true, 00:11:41.843 "data_offset": 0, 00:11:41.843 "data_size": 65536 00:11:41.843 }, 00:11:41.843 { 00:11:41.843 "name": "BaseBdev2", 00:11:41.843 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:41.843 "is_configured": true, 00:11:41.843 "data_offset": 0, 00:11:41.843 "data_size": 65536 00:11:41.843 }, 00:11:41.843 { 00:11:41.843 "name": "BaseBdev3", 00:11:41.843 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:41.843 "is_configured": true, 00:11:41.843 "data_offset": 0, 00:11:41.843 "data_size": 65536 00:11:41.843 } 00:11:41.843 ] 00:11:41.843 }' 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.843 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.102 [2024-11-26 20:24:35.614232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.102 "name": "Existed_Raid", 00:11:42.102 "aliases": [ 00:11:42.102 "1430444e-411a-4b3e-b176-eafe449f2867" 00:11:42.102 ], 00:11:42.102 "product_name": "Raid Volume", 00:11:42.102 "block_size": 512, 00:11:42.102 "num_blocks": 196608, 00:11:42.102 "uuid": "1430444e-411a-4b3e-b176-eafe449f2867", 00:11:42.102 "assigned_rate_limits": { 00:11:42.102 "rw_ios_per_sec": 0, 00:11:42.102 "rw_mbytes_per_sec": 0, 00:11:42.102 "r_mbytes_per_sec": 0, 00:11:42.102 "w_mbytes_per_sec": 0 00:11:42.102 }, 00:11:42.102 "claimed": false, 00:11:42.102 "zoned": false, 00:11:42.102 "supported_io_types": { 00:11:42.102 "read": true, 00:11:42.102 "write": true, 00:11:42.102 "unmap": true, 00:11:42.102 "flush": true, 00:11:42.102 "reset": true, 00:11:42.102 "nvme_admin": false, 00:11:42.102 "nvme_io": false, 00:11:42.102 "nvme_io_md": false, 00:11:42.102 "write_zeroes": true, 00:11:42.102 "zcopy": false, 00:11:42.102 "get_zone_info": false, 00:11:42.102 "zone_management": false, 00:11:42.102 "zone_append": false, 00:11:42.102 "compare": false, 00:11:42.102 "compare_and_write": false, 00:11:42.102 "abort": false, 00:11:42.102 "seek_hole": false, 00:11:42.102 "seek_data": false, 00:11:42.102 "copy": false, 00:11:42.102 "nvme_iov_md": false 00:11:42.102 }, 00:11:42.102 "memory_domains": [ 
00:11:42.102 { 00:11:42.102 "dma_device_id": "system", 00:11:42.102 "dma_device_type": 1 00:11:42.102 }, 00:11:42.102 { 00:11:42.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.102 "dma_device_type": 2 00:11:42.102 }, 00:11:42.102 { 00:11:42.102 "dma_device_id": "system", 00:11:42.102 "dma_device_type": 1 00:11:42.102 }, 00:11:42.102 { 00:11:42.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.102 "dma_device_type": 2 00:11:42.102 }, 00:11:42.102 { 00:11:42.102 "dma_device_id": "system", 00:11:42.102 "dma_device_type": 1 00:11:42.102 }, 00:11:42.102 { 00:11:42.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.102 "dma_device_type": 2 00:11:42.102 } 00:11:42.102 ], 00:11:42.102 "driver_specific": { 00:11:42.102 "raid": { 00:11:42.102 "uuid": "1430444e-411a-4b3e-b176-eafe449f2867", 00:11:42.102 "strip_size_kb": 64, 00:11:42.102 "state": "online", 00:11:42.102 "raid_level": "raid0", 00:11:42.102 "superblock": false, 00:11:42.102 "num_base_bdevs": 3, 00:11:42.102 "num_base_bdevs_discovered": 3, 00:11:42.102 "num_base_bdevs_operational": 3, 00:11:42.102 "base_bdevs_list": [ 00:11:42.102 { 00:11:42.102 "name": "NewBaseBdev", 00:11:42.102 "uuid": "e965009b-5257-46f7-8230-64d8822176b8", 00:11:42.102 "is_configured": true, 00:11:42.102 "data_offset": 0, 00:11:42.102 "data_size": 65536 00:11:42.102 }, 00:11:42.102 { 00:11:42.102 "name": "BaseBdev2", 00:11:42.102 "uuid": "35d67754-4898-4c1f-93e2-8fb20da98650", 00:11:42.102 "is_configured": true, 00:11:42.102 "data_offset": 0, 00:11:42.102 "data_size": 65536 00:11:42.102 }, 00:11:42.102 { 00:11:42.102 "name": "BaseBdev3", 00:11:42.102 "uuid": "7ac39959-6e80-4ff7-8822-04cd9ec1482a", 00:11:42.102 "is_configured": true, 00:11:42.102 "data_offset": 0, 00:11:42.102 "data_size": 65536 00:11:42.102 } 00:11:42.102 ] 00:11:42.102 } 00:11:42.102 } 00:11:42.102 }' 00:11:42.102 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured 
== true).name' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.362 BaseBdev2 00:11:42.362 BaseBdev3' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 [2024-11-26 20:24:35.861484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.362 [2024-11-26 20:24:35.861606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.362 [2024-11-26 20:24:35.861714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.362 [2024-11-26 
20:24:35.861781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.362 [2024-11-26 20:24:35.861795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64097 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64097 ']' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64097 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64097 00:11:42.362 killing process with pid 64097 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64097' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64097 00:11:42.362 [2024-11-26 20:24:35.905379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64097 00:11:42.930 [2024-11-26 20:24:36.278777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:11:44.307 00:11:44.307 real 0m11.558s 00:11:44.307 user 0m18.244s 00:11:44.307 sys 0m1.922s 00:11:44.307 ************************************ 00:11:44.307 END TEST raid_state_function_test 00:11:44.307 ************************************ 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.307 20:24:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:44.307 20:24:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.307 20:24:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.307 20:24:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.307 ************************************ 00:11:44.307 START TEST raid_state_function_test_sb 00:11:44.307 ************************************ 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:44.307 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64730 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64730' 00:11:44.308 Process raid pid: 64730 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64730 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64730 ']' 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.308 20:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 [2024-11-26 20:24:37.783026] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:11:44.308 [2024-11-26 20:24:37.783163] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.566 [2024-11-26 20:24:37.963313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.566 [2024-11-26 20:24:38.100414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.827 [2024-11-26 20:24:38.348397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.827 [2024-11-26 20:24:38.348466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 [2024-11-26 20:24:38.708414] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.394 [2024-11-26 20:24:38.708476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.394 [2024-11-26 20:24:38.708489] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.394 [2024-11-26 20:24:38.708501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.394 [2024-11-26 20:24:38.708509] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:11:45.394 [2024-11-26 20:24:38.708519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.394 "name": "Existed_Raid", 00:11:45.394 "uuid": "d5239000-170e-49a1-b7f8-3d49e2ec164e", 00:11:45.394 "strip_size_kb": 64, 00:11:45.394 "state": "configuring", 00:11:45.394 "raid_level": "raid0", 00:11:45.394 "superblock": true, 00:11:45.394 "num_base_bdevs": 3, 00:11:45.394 "num_base_bdevs_discovered": 0, 00:11:45.394 "num_base_bdevs_operational": 3, 00:11:45.394 "base_bdevs_list": [ 00:11:45.394 { 00:11:45.394 "name": "BaseBdev1", 00:11:45.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.394 "is_configured": false, 00:11:45.394 "data_offset": 0, 00:11:45.394 "data_size": 0 00:11:45.394 }, 00:11:45.394 { 00:11:45.394 "name": "BaseBdev2", 00:11:45.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.394 "is_configured": false, 00:11:45.394 "data_offset": 0, 00:11:45.394 "data_size": 0 00:11:45.394 }, 00:11:45.394 { 00:11:45.394 "name": "BaseBdev3", 00:11:45.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.394 "is_configured": false, 00:11:45.394 "data_offset": 0, 00:11:45.394 "data_size": 0 00:11:45.394 } 00:11:45.394 ] 00:11:45.394 }' 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.394 20:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.653 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.653 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.653 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.653 [2024-11-26 20:24:39.155577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.653 [2024-11-26 20:24:39.155700] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:45.653 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.653 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:45.653 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.653 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.653 [2024-11-26 20:24:39.163602] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.653 [2024-11-26 20:24:39.163720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.653 [2024-11-26 20:24:39.163760] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.653 [2024-11-26 20:24:39.163790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.653 [2024-11-26 20:24:39.163824] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.654 [2024-11-26 20:24:39.163851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.654 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.654 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.654 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.654 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.912 [2024-11-26 20:24:39.218457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.912 BaseBdev1 
00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.912 [ 00:11:45.912 { 00:11:45.912 "name": "BaseBdev1", 00:11:45.912 "aliases": [ 00:11:45.912 "a4ae06b6-9637-4178-9449-37f9dbec079b" 00:11:45.912 ], 00:11:45.912 "product_name": "Malloc disk", 00:11:45.912 "block_size": 512, 00:11:45.912 "num_blocks": 65536, 00:11:45.912 "uuid": "a4ae06b6-9637-4178-9449-37f9dbec079b", 00:11:45.912 "assigned_rate_limits": { 00:11:45.912 
"rw_ios_per_sec": 0, 00:11:45.912 "rw_mbytes_per_sec": 0, 00:11:45.912 "r_mbytes_per_sec": 0, 00:11:45.912 "w_mbytes_per_sec": 0 00:11:45.912 }, 00:11:45.912 "claimed": true, 00:11:45.912 "claim_type": "exclusive_write", 00:11:45.912 "zoned": false, 00:11:45.912 "supported_io_types": { 00:11:45.912 "read": true, 00:11:45.912 "write": true, 00:11:45.912 "unmap": true, 00:11:45.912 "flush": true, 00:11:45.912 "reset": true, 00:11:45.912 "nvme_admin": false, 00:11:45.912 "nvme_io": false, 00:11:45.912 "nvme_io_md": false, 00:11:45.912 "write_zeroes": true, 00:11:45.912 "zcopy": true, 00:11:45.912 "get_zone_info": false, 00:11:45.912 "zone_management": false, 00:11:45.912 "zone_append": false, 00:11:45.912 "compare": false, 00:11:45.912 "compare_and_write": false, 00:11:45.912 "abort": true, 00:11:45.912 "seek_hole": false, 00:11:45.912 "seek_data": false, 00:11:45.912 "copy": true, 00:11:45.912 "nvme_iov_md": false 00:11:45.912 }, 00:11:45.912 "memory_domains": [ 00:11:45.912 { 00:11:45.912 "dma_device_id": "system", 00:11:45.912 "dma_device_type": 1 00:11:45.912 }, 00:11:45.912 { 00:11:45.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.912 "dma_device_type": 2 00:11:45.912 } 00:11:45.912 ], 00:11:45.912 "driver_specific": {} 00:11:45.912 } 00:11:45.912 ] 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.912 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.913 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.913 "name": "Existed_Raid", 00:11:45.913 "uuid": "faf7c33f-2777-46f2-b8f8-27c0687a45a9", 00:11:45.913 "strip_size_kb": 64, 00:11:45.913 "state": "configuring", 00:11:45.913 "raid_level": "raid0", 00:11:45.913 "superblock": true, 00:11:45.913 "num_base_bdevs": 3, 00:11:45.913 "num_base_bdevs_discovered": 1, 00:11:45.913 "num_base_bdevs_operational": 3, 00:11:45.913 "base_bdevs_list": [ 00:11:45.913 { 00:11:45.913 "name": "BaseBdev1", 00:11:45.913 "uuid": "a4ae06b6-9637-4178-9449-37f9dbec079b", 00:11:45.913 "is_configured": true, 00:11:45.913 "data_offset": 2048, 00:11:45.913 "data_size": 63488 
00:11:45.913 }, 00:11:45.913 { 00:11:45.913 "name": "BaseBdev2", 00:11:45.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.913 "is_configured": false, 00:11:45.913 "data_offset": 0, 00:11:45.913 "data_size": 0 00:11:45.913 }, 00:11:45.913 { 00:11:45.913 "name": "BaseBdev3", 00:11:45.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.913 "is_configured": false, 00:11:45.913 "data_offset": 0, 00:11:45.913 "data_size": 0 00:11:45.913 } 00:11:45.913 ] 00:11:45.913 }' 00:11:45.913 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.913 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.482 [2024-11-26 20:24:39.733895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.482 [2024-11-26 20:24:39.734065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.482 [2024-11-26 20:24:39.745959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.482 [2024-11-26 
20:24:39.748181] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.482 [2024-11-26 20:24:39.748254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.482 [2024-11-26 20:24:39.748268] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:46.482 [2024-11-26 20:24:39.748280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.482 "name": "Existed_Raid", 00:11:46.482 "uuid": "1735c174-6e94-4e86-a663-494bb4c36d6d", 00:11:46.482 "strip_size_kb": 64, 00:11:46.482 "state": "configuring", 00:11:46.482 "raid_level": "raid0", 00:11:46.482 "superblock": true, 00:11:46.482 "num_base_bdevs": 3, 00:11:46.482 "num_base_bdevs_discovered": 1, 00:11:46.482 "num_base_bdevs_operational": 3, 00:11:46.482 "base_bdevs_list": [ 00:11:46.482 { 00:11:46.482 "name": "BaseBdev1", 00:11:46.482 "uuid": "a4ae06b6-9637-4178-9449-37f9dbec079b", 00:11:46.482 "is_configured": true, 00:11:46.482 "data_offset": 2048, 00:11:46.482 "data_size": 63488 00:11:46.482 }, 00:11:46.482 { 00:11:46.482 "name": "BaseBdev2", 00:11:46.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.482 "is_configured": false, 00:11:46.482 "data_offset": 0, 00:11:46.482 "data_size": 0 00:11:46.482 }, 00:11:46.482 { 00:11:46.482 "name": "BaseBdev3", 00:11:46.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.482 "is_configured": false, 00:11:46.482 "data_offset": 0, 00:11:46.482 "data_size": 0 00:11:46.482 } 00:11:46.482 ] 00:11:46.482 }' 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.482 20:24:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.742 [2024-11-26 20:24:40.261569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.742 BaseBdev2 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.742 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.742 [ 00:11:46.742 { 00:11:46.742 "name": "BaseBdev2", 00:11:46.742 "aliases": [ 00:11:46.742 "46802afd-b05e-4181-a09b-6bc7d8abf353" 00:11:46.742 ], 00:11:46.742 "product_name": "Malloc disk", 00:11:46.742 "block_size": 512, 00:11:46.742 "num_blocks": 65536, 00:11:46.742 "uuid": "46802afd-b05e-4181-a09b-6bc7d8abf353", 00:11:46.742 "assigned_rate_limits": { 00:11:46.742 "rw_ios_per_sec": 0, 00:11:46.742 "rw_mbytes_per_sec": 0, 00:11:46.742 "r_mbytes_per_sec": 0, 00:11:46.742 "w_mbytes_per_sec": 0 00:11:46.742 }, 00:11:46.742 "claimed": true, 00:11:46.742 "claim_type": "exclusive_write", 00:11:47.000 "zoned": false, 00:11:47.000 "supported_io_types": { 00:11:47.000 "read": true, 00:11:47.000 "write": true, 00:11:47.000 "unmap": true, 00:11:47.000 "flush": true, 00:11:47.000 "reset": true, 00:11:47.000 "nvme_admin": false, 00:11:47.000 "nvme_io": false, 00:11:47.000 "nvme_io_md": false, 00:11:47.000 "write_zeroes": true, 00:11:47.000 "zcopy": true, 00:11:47.000 "get_zone_info": false, 00:11:47.000 "zone_management": false, 00:11:47.000 "zone_append": false, 00:11:47.000 "compare": false, 00:11:47.000 "compare_and_write": false, 00:11:47.000 "abort": true, 00:11:47.000 "seek_hole": false, 00:11:47.000 "seek_data": false, 00:11:47.000 "copy": true, 00:11:47.000 "nvme_iov_md": false 00:11:47.000 }, 00:11:47.000 "memory_domains": [ 00:11:47.000 { 00:11:47.000 "dma_device_id": "system", 00:11:47.000 "dma_device_type": 1 00:11:47.000 }, 00:11:47.000 { 00:11:47.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.000 "dma_device_type": 2 00:11:47.000 } 00:11:47.000 ], 00:11:47.000 "driver_specific": {} 00:11:47.000 } 00:11:47.000 ] 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.000 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.000 "name": "Existed_Raid", 00:11:47.000 "uuid": "1735c174-6e94-4e86-a663-494bb4c36d6d", 00:11:47.001 "strip_size_kb": 64, 00:11:47.001 "state": "configuring", 00:11:47.001 "raid_level": "raid0", 00:11:47.001 "superblock": true, 00:11:47.001 "num_base_bdevs": 3, 00:11:47.001 "num_base_bdevs_discovered": 2, 00:11:47.001 "num_base_bdevs_operational": 3, 00:11:47.001 "base_bdevs_list": [ 00:11:47.001 { 00:11:47.001 "name": "BaseBdev1", 00:11:47.001 "uuid": "a4ae06b6-9637-4178-9449-37f9dbec079b", 00:11:47.001 "is_configured": true, 00:11:47.001 "data_offset": 2048, 00:11:47.001 "data_size": 63488 00:11:47.001 }, 00:11:47.001 { 00:11:47.001 "name": "BaseBdev2", 00:11:47.001 "uuid": "46802afd-b05e-4181-a09b-6bc7d8abf353", 00:11:47.001 "is_configured": true, 00:11:47.001 "data_offset": 2048, 00:11:47.001 "data_size": 63488 00:11:47.001 }, 00:11:47.001 { 00:11:47.001 "name": "BaseBdev3", 00:11:47.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.001 "is_configured": false, 00:11:47.001 "data_offset": 0, 00:11:47.001 "data_size": 0 00:11:47.001 } 00:11:47.001 ] 00:11:47.001 }' 00:11:47.001 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.001 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.258 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.258 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.258 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.518 [2024-11-26 20:24:40.846391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.518 [2024-11-26 20:24:40.846811] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.518 [2024-11-26 20:24:40.846881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:47.518 [2024-11-26 20:24:40.847214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:47.518 [2024-11-26 20:24:40.847467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.518 BaseBdev3 00:11:47.518 [2024-11-26 20:24:40.847518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:47.518 [2024-11-26 20:24:40.847740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.518 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.519 [ 00:11:47.519 { 00:11:47.519 "name": "BaseBdev3", 00:11:47.519 "aliases": [ 00:11:47.519 "6f4ba8e1-c1f0-4178-b917-ad68ff884b36" 00:11:47.519 ], 00:11:47.519 "product_name": "Malloc disk", 00:11:47.519 "block_size": 512, 00:11:47.519 "num_blocks": 65536, 00:11:47.519 "uuid": "6f4ba8e1-c1f0-4178-b917-ad68ff884b36", 00:11:47.519 "assigned_rate_limits": { 00:11:47.519 "rw_ios_per_sec": 0, 00:11:47.519 "rw_mbytes_per_sec": 0, 00:11:47.519 "r_mbytes_per_sec": 0, 00:11:47.519 "w_mbytes_per_sec": 0 00:11:47.519 }, 00:11:47.519 "claimed": true, 00:11:47.519 "claim_type": "exclusive_write", 00:11:47.519 "zoned": false, 00:11:47.519 "supported_io_types": { 00:11:47.519 "read": true, 00:11:47.519 "write": true, 00:11:47.519 "unmap": true, 00:11:47.519 "flush": true, 00:11:47.519 "reset": true, 00:11:47.519 "nvme_admin": false, 00:11:47.519 "nvme_io": false, 00:11:47.519 "nvme_io_md": false, 00:11:47.519 "write_zeroes": true, 00:11:47.519 "zcopy": true, 00:11:47.519 "get_zone_info": false, 00:11:47.519 "zone_management": false, 00:11:47.519 "zone_append": false, 00:11:47.519 "compare": false, 00:11:47.519 "compare_and_write": false, 00:11:47.519 "abort": true, 00:11:47.519 "seek_hole": false, 00:11:47.519 "seek_data": false, 00:11:47.519 "copy": true, 00:11:47.519 "nvme_iov_md": false 00:11:47.519 }, 00:11:47.519 "memory_domains": [ 00:11:47.519 { 00:11:47.519 "dma_device_id": "system", 00:11:47.519 "dma_device_type": 1 00:11:47.519 }, 00:11:47.519 { 00:11:47.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.519 "dma_device_type": 2 00:11:47.519 } 00:11:47.519 ], 00:11:47.519 "driver_specific": 
{} 00:11:47.519 } 00:11:47.519 ] 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.519 "name": "Existed_Raid", 00:11:47.519 "uuid": "1735c174-6e94-4e86-a663-494bb4c36d6d", 00:11:47.519 "strip_size_kb": 64, 00:11:47.519 "state": "online", 00:11:47.519 "raid_level": "raid0", 00:11:47.519 "superblock": true, 00:11:47.519 "num_base_bdevs": 3, 00:11:47.519 "num_base_bdevs_discovered": 3, 00:11:47.519 "num_base_bdevs_operational": 3, 00:11:47.519 "base_bdevs_list": [ 00:11:47.519 { 00:11:47.519 "name": "BaseBdev1", 00:11:47.519 "uuid": "a4ae06b6-9637-4178-9449-37f9dbec079b", 00:11:47.519 "is_configured": true, 00:11:47.519 "data_offset": 2048, 00:11:47.519 "data_size": 63488 00:11:47.519 }, 00:11:47.519 { 00:11:47.519 "name": "BaseBdev2", 00:11:47.519 "uuid": "46802afd-b05e-4181-a09b-6bc7d8abf353", 00:11:47.519 "is_configured": true, 00:11:47.519 "data_offset": 2048, 00:11:47.519 "data_size": 63488 00:11:47.519 }, 00:11:47.519 { 00:11:47.519 "name": "BaseBdev3", 00:11:47.519 "uuid": "6f4ba8e1-c1f0-4178-b917-ad68ff884b36", 00:11:47.519 "is_configured": true, 00:11:47.519 "data_offset": 2048, 00:11:47.519 "data_size": 63488 00:11:47.519 } 00:11:47.519 ] 00:11:47.519 }' 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.519 20:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.089 [2024-11-26 20:24:41.381980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.089 "name": "Existed_Raid", 00:11:48.089 "aliases": [ 00:11:48.089 "1735c174-6e94-4e86-a663-494bb4c36d6d" 00:11:48.089 ], 00:11:48.089 "product_name": "Raid Volume", 00:11:48.089 "block_size": 512, 00:11:48.089 "num_blocks": 190464, 00:11:48.089 "uuid": "1735c174-6e94-4e86-a663-494bb4c36d6d", 00:11:48.089 "assigned_rate_limits": { 00:11:48.089 "rw_ios_per_sec": 0, 00:11:48.089 "rw_mbytes_per_sec": 0, 00:11:48.089 "r_mbytes_per_sec": 0, 00:11:48.089 "w_mbytes_per_sec": 0 00:11:48.089 }, 00:11:48.089 "claimed": false, 00:11:48.089 "zoned": false, 00:11:48.089 "supported_io_types": { 00:11:48.089 "read": true, 00:11:48.089 "write": true, 00:11:48.089 "unmap": true, 00:11:48.089 "flush": true, 00:11:48.089 "reset": true, 00:11:48.089 "nvme_admin": false, 00:11:48.089 "nvme_io": false, 00:11:48.089 "nvme_io_md": false, 00:11:48.089 
"write_zeroes": true, 00:11:48.089 "zcopy": false, 00:11:48.089 "get_zone_info": false, 00:11:48.089 "zone_management": false, 00:11:48.089 "zone_append": false, 00:11:48.089 "compare": false, 00:11:48.089 "compare_and_write": false, 00:11:48.089 "abort": false, 00:11:48.089 "seek_hole": false, 00:11:48.089 "seek_data": false, 00:11:48.089 "copy": false, 00:11:48.089 "nvme_iov_md": false 00:11:48.089 }, 00:11:48.089 "memory_domains": [ 00:11:48.089 { 00:11:48.089 "dma_device_id": "system", 00:11:48.089 "dma_device_type": 1 00:11:48.089 }, 00:11:48.089 { 00:11:48.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.089 "dma_device_type": 2 00:11:48.089 }, 00:11:48.089 { 00:11:48.089 "dma_device_id": "system", 00:11:48.089 "dma_device_type": 1 00:11:48.089 }, 00:11:48.089 { 00:11:48.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.089 "dma_device_type": 2 00:11:48.089 }, 00:11:48.089 { 00:11:48.089 "dma_device_id": "system", 00:11:48.089 "dma_device_type": 1 00:11:48.089 }, 00:11:48.089 { 00:11:48.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.089 "dma_device_type": 2 00:11:48.089 } 00:11:48.089 ], 00:11:48.089 "driver_specific": { 00:11:48.089 "raid": { 00:11:48.089 "uuid": "1735c174-6e94-4e86-a663-494bb4c36d6d", 00:11:48.089 "strip_size_kb": 64, 00:11:48.089 "state": "online", 00:11:48.089 "raid_level": "raid0", 00:11:48.089 "superblock": true, 00:11:48.089 "num_base_bdevs": 3, 00:11:48.089 "num_base_bdevs_discovered": 3, 00:11:48.089 "num_base_bdevs_operational": 3, 00:11:48.089 "base_bdevs_list": [ 00:11:48.089 { 00:11:48.089 "name": "BaseBdev1", 00:11:48.089 "uuid": "a4ae06b6-9637-4178-9449-37f9dbec079b", 00:11:48.089 "is_configured": true, 00:11:48.089 "data_offset": 2048, 00:11:48.089 "data_size": 63488 00:11:48.089 }, 00:11:48.089 { 00:11:48.089 "name": "BaseBdev2", 00:11:48.089 "uuid": "46802afd-b05e-4181-a09b-6bc7d8abf353", 00:11:48.089 "is_configured": true, 00:11:48.089 "data_offset": 2048, 00:11:48.089 "data_size": 63488 00:11:48.089 }, 
00:11:48.089 { 00:11:48.089 "name": "BaseBdev3", 00:11:48.089 "uuid": "6f4ba8e1-c1f0-4178-b917-ad68ff884b36", 00:11:48.089 "is_configured": true, 00:11:48.089 "data_offset": 2048, 00:11:48.089 "data_size": 63488 00:11:48.089 } 00:11:48.089 ] 00:11:48.089 } 00:11:48.089 } 00:11:48.089 }' 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:48.089 BaseBdev2 00:11:48.089 BaseBdev3' 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.089 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.090 
20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.090 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.349 [2024-11-26 20:24:41.685433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.349 [2024-11-26 20:24:41.685512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.349 [2024-11-26 20:24:41.685601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.349 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.349 "name": "Existed_Raid", 00:11:48.349 "uuid": "1735c174-6e94-4e86-a663-494bb4c36d6d", 00:11:48.349 "strip_size_kb": 64, 00:11:48.349 "state": "offline", 00:11:48.349 "raid_level": "raid0", 00:11:48.349 "superblock": true, 00:11:48.349 "num_base_bdevs": 3, 00:11:48.349 "num_base_bdevs_discovered": 2, 00:11:48.349 "num_base_bdevs_operational": 2, 00:11:48.349 "base_bdevs_list": [ 00:11:48.349 { 00:11:48.349 "name": null, 00:11:48.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.349 "is_configured": false, 00:11:48.349 "data_offset": 0, 00:11:48.349 "data_size": 63488 00:11:48.349 }, 00:11:48.350 { 00:11:48.350 "name": "BaseBdev2", 00:11:48.350 "uuid": "46802afd-b05e-4181-a09b-6bc7d8abf353", 00:11:48.350 "is_configured": true, 00:11:48.350 "data_offset": 2048, 00:11:48.350 "data_size": 63488 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "name": "BaseBdev3", 00:11:48.350 "uuid": "6f4ba8e1-c1f0-4178-b917-ad68ff884b36", 
00:11:48.350 "is_configured": true, 00:11:48.350 "data_offset": 2048, 00:11:48.350 "data_size": 63488 00:11:48.350 } 00:11:48.350 ] 00:11:48.350 }' 00:11:48.350 20:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.350 20:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.918 [2024-11-26 20:24:42.294022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.918 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.918 [2024-11-26 20:24:42.467025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.918 [2024-11-26 20:24:42.467091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.178 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.179 BaseBdev2 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.179 20:24:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.179 [ 00:11:49.179 { 00:11:49.179 "name": "BaseBdev2", 00:11:49.179 "aliases": [ 00:11:49.179 "a1f49903-9900-4407-9925-38fa649c39f6" 00:11:49.179 ], 00:11:49.179 "product_name": "Malloc disk", 00:11:49.179 "block_size": 512, 00:11:49.179 "num_blocks": 65536, 00:11:49.179 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:49.179 "assigned_rate_limits": { 00:11:49.179 "rw_ios_per_sec": 0, 00:11:49.179 "rw_mbytes_per_sec": 0, 00:11:49.179 "r_mbytes_per_sec": 0, 00:11:49.179 "w_mbytes_per_sec": 0 00:11:49.179 }, 00:11:49.179 "claimed": false, 00:11:49.179 "zoned": false, 00:11:49.179 "supported_io_types": { 00:11:49.179 "read": true, 00:11:49.179 "write": true, 00:11:49.179 "unmap": true, 00:11:49.179 "flush": true, 00:11:49.179 "reset": true, 00:11:49.179 "nvme_admin": false, 00:11:49.179 "nvme_io": false, 00:11:49.179 "nvme_io_md": false, 00:11:49.179 "write_zeroes": true, 00:11:49.179 "zcopy": true, 00:11:49.179 "get_zone_info": false, 00:11:49.179 
"zone_management": false, 00:11:49.179 "zone_append": false, 00:11:49.179 "compare": false, 00:11:49.179 "compare_and_write": false, 00:11:49.179 "abort": true, 00:11:49.179 "seek_hole": false, 00:11:49.179 "seek_data": false, 00:11:49.179 "copy": true, 00:11:49.179 "nvme_iov_md": false 00:11:49.179 }, 00:11:49.179 "memory_domains": [ 00:11:49.179 { 00:11:49.179 "dma_device_id": "system", 00:11:49.179 "dma_device_type": 1 00:11:49.179 }, 00:11:49.179 { 00:11:49.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.179 "dma_device_type": 2 00:11:49.179 } 00:11:49.179 ], 00:11:49.179 "driver_specific": {} 00:11:49.179 } 00:11:49.179 ] 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.179 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 BaseBdev3 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 [ 00:11:49.439 { 00:11:49.439 "name": "BaseBdev3", 00:11:49.439 "aliases": [ 00:11:49.439 "fad89075-0776-4ee5-b32a-ef09da779ae8" 00:11:49.439 ], 00:11:49.439 "product_name": "Malloc disk", 00:11:49.439 "block_size": 512, 00:11:49.439 "num_blocks": 65536, 00:11:49.439 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:49.439 "assigned_rate_limits": { 00:11:49.439 "rw_ios_per_sec": 0, 00:11:49.439 "rw_mbytes_per_sec": 0, 00:11:49.439 "r_mbytes_per_sec": 0, 00:11:49.439 "w_mbytes_per_sec": 0 00:11:49.439 }, 00:11:49.439 "claimed": false, 00:11:49.439 "zoned": false, 00:11:49.439 "supported_io_types": { 00:11:49.439 "read": true, 00:11:49.439 "write": true, 00:11:49.439 "unmap": true, 00:11:49.439 "flush": true, 00:11:49.439 "reset": true, 00:11:49.439 "nvme_admin": false, 00:11:49.439 "nvme_io": false, 00:11:49.439 "nvme_io_md": false, 00:11:49.439 "write_zeroes": true, 00:11:49.439 
"zcopy": true, 00:11:49.439 "get_zone_info": false, 00:11:49.439 "zone_management": false, 00:11:49.439 "zone_append": false, 00:11:49.439 "compare": false, 00:11:49.439 "compare_and_write": false, 00:11:49.439 "abort": true, 00:11:49.439 "seek_hole": false, 00:11:49.439 "seek_data": false, 00:11:49.439 "copy": true, 00:11:49.439 "nvme_iov_md": false 00:11:49.439 }, 00:11:49.439 "memory_domains": [ 00:11:49.439 { 00:11:49.439 "dma_device_id": "system", 00:11:49.439 "dma_device_type": 1 00:11:49.439 }, 00:11:49.439 { 00:11:49.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.439 "dma_device_type": 2 00:11:49.439 } 00:11:49.439 ], 00:11:49.439 "driver_specific": {} 00:11:49.439 } 00:11:49.439 ] 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 [2024-11-26 20:24:42.829088] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.439 [2024-11-26 20:24:42.829273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.439 [2024-11-26 20:24:42.829345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.439 [2024-11-26 20:24:42.831657] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 20:24:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.439 "name": "Existed_Raid", 00:11:49.439 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:49.439 "strip_size_kb": 64, 00:11:49.439 "state": "configuring", 00:11:49.439 "raid_level": "raid0", 00:11:49.439 "superblock": true, 00:11:49.439 "num_base_bdevs": 3, 00:11:49.439 "num_base_bdevs_discovered": 2, 00:11:49.439 "num_base_bdevs_operational": 3, 00:11:49.439 "base_bdevs_list": [ 00:11:49.439 { 00:11:49.439 "name": "BaseBdev1", 00:11:49.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.439 "is_configured": false, 00:11:49.439 "data_offset": 0, 00:11:49.439 "data_size": 0 00:11:49.439 }, 00:11:49.439 { 00:11:49.439 "name": "BaseBdev2", 00:11:49.439 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:49.439 "is_configured": true, 00:11:49.439 "data_offset": 2048, 00:11:49.439 "data_size": 63488 00:11:49.439 }, 00:11:49.439 { 00:11:49.439 "name": "BaseBdev3", 00:11:49.439 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:49.439 "is_configured": true, 00:11:49.439 "data_offset": 2048, 00:11:49.439 "data_size": 63488 00:11:49.439 } 00:11:49.439 ] 00:11:49.439 }' 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.439 20:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.007 [2024-11-26 20:24:43.316450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.007 20:24:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.007 "name": "Existed_Raid", 00:11:50.007 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:50.007 "strip_size_kb": 64, 
00:11:50.007 "state": "configuring", 00:11:50.007 "raid_level": "raid0", 00:11:50.007 "superblock": true, 00:11:50.007 "num_base_bdevs": 3, 00:11:50.007 "num_base_bdevs_discovered": 1, 00:11:50.007 "num_base_bdevs_operational": 3, 00:11:50.007 "base_bdevs_list": [ 00:11:50.007 { 00:11:50.007 "name": "BaseBdev1", 00:11:50.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.007 "is_configured": false, 00:11:50.007 "data_offset": 0, 00:11:50.007 "data_size": 0 00:11:50.007 }, 00:11:50.007 { 00:11:50.007 "name": null, 00:11:50.007 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:50.007 "is_configured": false, 00:11:50.007 "data_offset": 0, 00:11:50.007 "data_size": 63488 00:11:50.007 }, 00:11:50.007 { 00:11:50.007 "name": "BaseBdev3", 00:11:50.007 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:50.007 "is_configured": true, 00:11:50.007 "data_offset": 2048, 00:11:50.007 "data_size": 63488 00:11:50.007 } 00:11:50.007 ] 00:11:50.007 }' 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.007 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.266 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.266 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.266 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.266 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.266 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.525 [2024-11-26 20:24:43.876321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.525 BaseBdev1 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.525 
[ 00:11:50.525 { 00:11:50.525 "name": "BaseBdev1", 00:11:50.525 "aliases": [ 00:11:50.525 "7c3a56be-6e0c-46e7-b4bf-81267d5fa997" 00:11:50.525 ], 00:11:50.525 "product_name": "Malloc disk", 00:11:50.525 "block_size": 512, 00:11:50.525 "num_blocks": 65536, 00:11:50.525 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:50.525 "assigned_rate_limits": { 00:11:50.525 "rw_ios_per_sec": 0, 00:11:50.525 "rw_mbytes_per_sec": 0, 00:11:50.525 "r_mbytes_per_sec": 0, 00:11:50.525 "w_mbytes_per_sec": 0 00:11:50.525 }, 00:11:50.525 "claimed": true, 00:11:50.525 "claim_type": "exclusive_write", 00:11:50.525 "zoned": false, 00:11:50.525 "supported_io_types": { 00:11:50.525 "read": true, 00:11:50.525 "write": true, 00:11:50.525 "unmap": true, 00:11:50.525 "flush": true, 00:11:50.525 "reset": true, 00:11:50.525 "nvme_admin": false, 00:11:50.525 "nvme_io": false, 00:11:50.525 "nvme_io_md": false, 00:11:50.525 "write_zeroes": true, 00:11:50.525 "zcopy": true, 00:11:50.525 "get_zone_info": false, 00:11:50.525 "zone_management": false, 00:11:50.525 "zone_append": false, 00:11:50.525 "compare": false, 00:11:50.525 "compare_and_write": false, 00:11:50.525 "abort": true, 00:11:50.525 "seek_hole": false, 00:11:50.525 "seek_data": false, 00:11:50.525 "copy": true, 00:11:50.525 "nvme_iov_md": false 00:11:50.525 }, 00:11:50.525 "memory_domains": [ 00:11:50.525 { 00:11:50.525 "dma_device_id": "system", 00:11:50.525 "dma_device_type": 1 00:11:50.525 }, 00:11:50.525 { 00:11:50.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.525 "dma_device_type": 2 00:11:50.525 } 00:11:50.525 ], 00:11:50.525 "driver_specific": {} 00:11:50.525 } 00:11:50.525 ] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.525 "name": "Existed_Raid", 00:11:50.525 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:50.525 "strip_size_kb": 64, 00:11:50.525 "state": "configuring", 00:11:50.525 "raid_level": "raid0", 00:11:50.525 "superblock": true, 
00:11:50.525 "num_base_bdevs": 3, 00:11:50.525 "num_base_bdevs_discovered": 2, 00:11:50.525 "num_base_bdevs_operational": 3, 00:11:50.525 "base_bdevs_list": [ 00:11:50.525 { 00:11:50.525 "name": "BaseBdev1", 00:11:50.525 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:50.525 "is_configured": true, 00:11:50.525 "data_offset": 2048, 00:11:50.525 "data_size": 63488 00:11:50.525 }, 00:11:50.525 { 00:11:50.525 "name": null, 00:11:50.525 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:50.525 "is_configured": false, 00:11:50.525 "data_offset": 0, 00:11:50.525 "data_size": 63488 00:11:50.525 }, 00:11:50.525 { 00:11:50.525 "name": "BaseBdev3", 00:11:50.525 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:50.525 "is_configured": true, 00:11:50.525 "data_offset": 2048, 00:11:50.525 "data_size": 63488 00:11:50.525 } 00:11:50.525 ] 00:11:50.525 }' 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.525 20:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 [2024-11-26 20:24:44.423444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.093 "name": "Existed_Raid", 00:11:51.093 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:51.093 "strip_size_kb": 64, 00:11:51.093 "state": "configuring", 00:11:51.093 "raid_level": "raid0", 00:11:51.093 "superblock": true, 00:11:51.093 "num_base_bdevs": 3, 00:11:51.093 "num_base_bdevs_discovered": 1, 00:11:51.093 "num_base_bdevs_operational": 3, 00:11:51.093 "base_bdevs_list": [ 00:11:51.093 { 00:11:51.093 "name": "BaseBdev1", 00:11:51.093 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:51.093 "is_configured": true, 00:11:51.093 "data_offset": 2048, 00:11:51.093 "data_size": 63488 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": null, 00:11:51.093 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:51.093 "is_configured": false, 00:11:51.093 "data_offset": 0, 00:11:51.093 "data_size": 63488 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": null, 00:11:51.093 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:51.093 "is_configured": false, 00:11:51.093 "data_offset": 0, 00:11:51.093 "data_size": 63488 00:11:51.093 } 00:11:51.093 ] 00:11:51.093 }' 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.093 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.352 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.352 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.352 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.352 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.612 [2024-11-26 20:24:44.954625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.612 20:24:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.612 "name": "Existed_Raid", 00:11:51.612 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:51.612 "strip_size_kb": 64, 00:11:51.612 "state": "configuring", 00:11:51.612 "raid_level": "raid0", 00:11:51.612 "superblock": true, 00:11:51.612 "num_base_bdevs": 3, 00:11:51.612 "num_base_bdevs_discovered": 2, 00:11:51.612 "num_base_bdevs_operational": 3, 00:11:51.612 "base_bdevs_list": [ 00:11:51.612 { 00:11:51.612 "name": "BaseBdev1", 00:11:51.612 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:51.612 "is_configured": true, 00:11:51.612 "data_offset": 2048, 00:11:51.612 "data_size": 63488 00:11:51.612 }, 00:11:51.612 { 00:11:51.612 "name": null, 00:11:51.612 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:51.612 "is_configured": false, 00:11:51.612 "data_offset": 0, 00:11:51.612 "data_size": 63488 00:11:51.612 }, 00:11:51.612 { 00:11:51.612 "name": "BaseBdev3", 00:11:51.612 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:51.612 "is_configured": true, 00:11:51.612 "data_offset": 2048, 00:11:51.612 "data_size": 63488 00:11:51.612 } 00:11:51.612 ] 00:11:51.612 }' 00:11:51.612 20:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.612 
20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.871 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.871 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.871 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.871 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.130 [2024-11-26 20:24:45.469875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.130 20:24:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.130 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.131 "name": "Existed_Raid", 00:11:52.131 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:52.131 "strip_size_kb": 64, 00:11:52.131 "state": "configuring", 00:11:52.131 "raid_level": "raid0", 00:11:52.131 "superblock": true, 00:11:52.131 "num_base_bdevs": 3, 00:11:52.131 "num_base_bdevs_discovered": 1, 00:11:52.131 "num_base_bdevs_operational": 3, 00:11:52.131 "base_bdevs_list": [ 00:11:52.131 { 00:11:52.131 "name": null, 00:11:52.131 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:52.131 "is_configured": false, 00:11:52.131 "data_offset": 0, 00:11:52.131 "data_size": 63488 00:11:52.131 }, 00:11:52.131 { 00:11:52.131 "name": null, 00:11:52.131 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:52.131 "is_configured": false, 
00:11:52.131 "data_offset": 0, 00:11:52.131 "data_size": 63488 00:11:52.131 }, 00:11:52.131 { 00:11:52.131 "name": "BaseBdev3", 00:11:52.131 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:52.131 "is_configured": true, 00:11:52.131 "data_offset": 2048, 00:11:52.131 "data_size": 63488 00:11:52.131 } 00:11:52.131 ] 00:11:52.131 }' 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.131 20:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.699 [2024-11-26 20:24:46.113625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.699 "name": "Existed_Raid", 00:11:52.699 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:52.699 "strip_size_kb": 64, 00:11:52.699 "state": "configuring", 00:11:52.699 "raid_level": "raid0", 00:11:52.699 "superblock": true, 00:11:52.699 
"num_base_bdevs": 3, 00:11:52.699 "num_base_bdevs_discovered": 2, 00:11:52.699 "num_base_bdevs_operational": 3, 00:11:52.699 "base_bdevs_list": [ 00:11:52.699 { 00:11:52.699 "name": null, 00:11:52.699 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:52.699 "is_configured": false, 00:11:52.699 "data_offset": 0, 00:11:52.699 "data_size": 63488 00:11:52.699 }, 00:11:52.699 { 00:11:52.699 "name": "BaseBdev2", 00:11:52.699 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:52.699 "is_configured": true, 00:11:52.699 "data_offset": 2048, 00:11:52.699 "data_size": 63488 00:11:52.699 }, 00:11:52.699 { 00:11:52.699 "name": "BaseBdev3", 00:11:52.699 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:52.699 "is_configured": true, 00:11:52.699 "data_offset": 2048, 00:11:52.699 "data_size": 63488 00:11:52.699 } 00:11:52.699 ] 00:11:52.699 }' 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.699 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7c3a56be-6e0c-46e7-b4bf-81267d5fa997 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 [2024-11-26 20:24:46.740273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:53.268 [2024-11-26 20:24:46.740648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:53.268 [2024-11-26 20:24:46.740713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:53.268 [2024-11-26 20:24:46.741022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:53.268 [2024-11-26 20:24:46.741261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:53.268 [2024-11-26 20:24:46.741311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:53.268 NewBaseBdev 00:11:53.268 [2024-11-26 20:24:46.741526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:53.268 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.269 [ 00:11:53.269 { 00:11:53.269 "name": "NewBaseBdev", 00:11:53.269 "aliases": [ 00:11:53.269 "7c3a56be-6e0c-46e7-b4bf-81267d5fa997" 00:11:53.269 ], 00:11:53.269 "product_name": "Malloc disk", 00:11:53.269 "block_size": 512, 00:11:53.269 "num_blocks": 65536, 00:11:53.269 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:53.269 "assigned_rate_limits": { 00:11:53.269 "rw_ios_per_sec": 0, 00:11:53.269 "rw_mbytes_per_sec": 0, 00:11:53.269 "r_mbytes_per_sec": 0, 00:11:53.269 "w_mbytes_per_sec": 0 00:11:53.269 }, 00:11:53.269 "claimed": true, 00:11:53.269 "claim_type": "exclusive_write", 00:11:53.269 "zoned": false, 00:11:53.269 "supported_io_types": { 00:11:53.269 "read": true, 00:11:53.269 
"write": true, 00:11:53.269 "unmap": true, 00:11:53.269 "flush": true, 00:11:53.269 "reset": true, 00:11:53.269 "nvme_admin": false, 00:11:53.269 "nvme_io": false, 00:11:53.269 "nvme_io_md": false, 00:11:53.269 "write_zeroes": true, 00:11:53.269 "zcopy": true, 00:11:53.269 "get_zone_info": false, 00:11:53.269 "zone_management": false, 00:11:53.269 "zone_append": false, 00:11:53.269 "compare": false, 00:11:53.269 "compare_and_write": false, 00:11:53.269 "abort": true, 00:11:53.269 "seek_hole": false, 00:11:53.269 "seek_data": false, 00:11:53.269 "copy": true, 00:11:53.269 "nvme_iov_md": false 00:11:53.269 }, 00:11:53.269 "memory_domains": [ 00:11:53.269 { 00:11:53.269 "dma_device_id": "system", 00:11:53.269 "dma_device_type": 1 00:11:53.269 }, 00:11:53.269 { 00:11:53.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.269 "dma_device_type": 2 00:11:53.269 } 00:11:53.269 ], 00:11:53.269 "driver_specific": {} 00:11:53.269 } 00:11:53.269 ] 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.269 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.528 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.528 "name": "Existed_Raid", 00:11:53.528 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:53.528 "strip_size_kb": 64, 00:11:53.528 "state": "online", 00:11:53.528 "raid_level": "raid0", 00:11:53.528 "superblock": true, 00:11:53.528 "num_base_bdevs": 3, 00:11:53.528 "num_base_bdevs_discovered": 3, 00:11:53.528 "num_base_bdevs_operational": 3, 00:11:53.528 "base_bdevs_list": [ 00:11:53.528 { 00:11:53.528 "name": "NewBaseBdev", 00:11:53.528 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:53.528 "is_configured": true, 00:11:53.528 "data_offset": 2048, 00:11:53.528 "data_size": 63488 00:11:53.528 }, 00:11:53.528 { 00:11:53.528 "name": "BaseBdev2", 00:11:53.528 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:53.528 "is_configured": true, 00:11:53.528 "data_offset": 2048, 00:11:53.528 "data_size": 63488 00:11:53.528 }, 00:11:53.528 { 00:11:53.528 "name": "BaseBdev3", 00:11:53.528 "uuid": 
"fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:53.528 "is_configured": true, 00:11:53.528 "data_offset": 2048, 00:11:53.528 "data_size": 63488 00:11:53.528 } 00:11:53.528 ] 00:11:53.528 }' 00:11:53.528 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.528 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.788 [2024-11-26 20:24:47.271794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.788 "name": "Existed_Raid", 00:11:53.788 "aliases": [ 00:11:53.788 "64004643-705d-44dd-b8ce-171d2d54d53b" 
00:11:53.788 ], 00:11:53.788 "product_name": "Raid Volume", 00:11:53.788 "block_size": 512, 00:11:53.788 "num_blocks": 190464, 00:11:53.788 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:53.788 "assigned_rate_limits": { 00:11:53.788 "rw_ios_per_sec": 0, 00:11:53.788 "rw_mbytes_per_sec": 0, 00:11:53.788 "r_mbytes_per_sec": 0, 00:11:53.788 "w_mbytes_per_sec": 0 00:11:53.788 }, 00:11:53.788 "claimed": false, 00:11:53.788 "zoned": false, 00:11:53.788 "supported_io_types": { 00:11:53.788 "read": true, 00:11:53.788 "write": true, 00:11:53.788 "unmap": true, 00:11:53.788 "flush": true, 00:11:53.788 "reset": true, 00:11:53.788 "nvme_admin": false, 00:11:53.788 "nvme_io": false, 00:11:53.788 "nvme_io_md": false, 00:11:53.788 "write_zeroes": true, 00:11:53.788 "zcopy": false, 00:11:53.788 "get_zone_info": false, 00:11:53.788 "zone_management": false, 00:11:53.788 "zone_append": false, 00:11:53.788 "compare": false, 00:11:53.788 "compare_and_write": false, 00:11:53.788 "abort": false, 00:11:53.788 "seek_hole": false, 00:11:53.788 "seek_data": false, 00:11:53.788 "copy": false, 00:11:53.788 "nvme_iov_md": false 00:11:53.788 }, 00:11:53.788 "memory_domains": [ 00:11:53.788 { 00:11:53.788 "dma_device_id": "system", 00:11:53.788 "dma_device_type": 1 00:11:53.788 }, 00:11:53.788 { 00:11:53.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.788 "dma_device_type": 2 00:11:53.788 }, 00:11:53.788 { 00:11:53.788 "dma_device_id": "system", 00:11:53.788 "dma_device_type": 1 00:11:53.788 }, 00:11:53.788 { 00:11:53.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.788 "dma_device_type": 2 00:11:53.788 }, 00:11:53.788 { 00:11:53.788 "dma_device_id": "system", 00:11:53.788 "dma_device_type": 1 00:11:53.788 }, 00:11:53.788 { 00:11:53.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.788 "dma_device_type": 2 00:11:53.788 } 00:11:53.788 ], 00:11:53.788 "driver_specific": { 00:11:53.788 "raid": { 00:11:53.788 "uuid": "64004643-705d-44dd-b8ce-171d2d54d53b", 00:11:53.788 
"strip_size_kb": 64, 00:11:53.788 "state": "online", 00:11:53.788 "raid_level": "raid0", 00:11:53.788 "superblock": true, 00:11:53.788 "num_base_bdevs": 3, 00:11:53.788 "num_base_bdevs_discovered": 3, 00:11:53.788 "num_base_bdevs_operational": 3, 00:11:53.788 "base_bdevs_list": [ 00:11:53.788 { 00:11:53.788 "name": "NewBaseBdev", 00:11:53.788 "uuid": "7c3a56be-6e0c-46e7-b4bf-81267d5fa997", 00:11:53.788 "is_configured": true, 00:11:53.788 "data_offset": 2048, 00:11:53.788 "data_size": 63488 00:11:53.788 }, 00:11:53.788 { 00:11:53.788 "name": "BaseBdev2", 00:11:53.788 "uuid": "a1f49903-9900-4407-9925-38fa649c39f6", 00:11:53.788 "is_configured": true, 00:11:53.788 "data_offset": 2048, 00:11:53.788 "data_size": 63488 00:11:53.788 }, 00:11:53.788 { 00:11:53.788 "name": "BaseBdev3", 00:11:53.788 "uuid": "fad89075-0776-4ee5-b32a-ef09da779ae8", 00:11:53.788 "is_configured": true, 00:11:53.788 "data_offset": 2048, 00:11:53.788 "data_size": 63488 00:11:53.788 } 00:11:53.788 ] 00:11:53.788 } 00:11:53.788 } 00:11:53.788 }' 00:11:53.788 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:54.047 BaseBdev2 00:11:54.047 BaseBdev3' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.047 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.048 20:24:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.048 [2024-11-26 20:24:47.558971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.048 [2024-11-26 20:24:47.559005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.048 [2024-11-26 20:24:47.559105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.048 [2024-11-26 20:24:47.559169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.048 [2024-11-26 20:24:47.559184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64730 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64730 ']' 00:11:54.048 20:24:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64730 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.048 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64730 00:11:54.308 killing process with pid 64730 00:11:54.308 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.308 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.308 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64730' 00:11:54.308 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64730 00:11:54.308 [2024-11-26 20:24:47.607987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.308 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64730 00:11:54.566 [2024-11-26 20:24:47.979672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.961 ************************************ 00:11:55.961 END TEST raid_state_function_test_sb 00:11:55.961 ************************************ 00:11:55.961 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:55.961 00:11:55.961 real 0m11.677s 00:11:55.961 user 0m18.430s 00:11:55.961 sys 0m1.986s 00:11:55.961 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.961 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.961 20:24:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:55.961 20:24:49 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.961 20:24:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.961 20:24:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.961 ************************************ 00:11:55.961 START TEST raid_superblock_test 00:11:55.961 ************************************ 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:55.961 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:55.962 20:24:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65361 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65361 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65361 ']' 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.962 20:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 [2024-11-26 20:24:49.523157] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:11:56.221 [2024-11-26 20:24:49.523356] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65361 ] 00:11:56.221 [2024-11-26 20:24:49.703136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.479 [2024-11-26 20:24:49.838843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.738 [2024-11-26 20:24:50.092722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.738 [2024-11-26 20:24:50.092777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:56.997 
20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.997 malloc1 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.997 [2024-11-26 20:24:50.520637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:56.997 [2024-11-26 20:24:50.520792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.997 [2024-11-26 20:24:50.520859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:56.997 [2024-11-26 20:24:50.520907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.997 [2024-11-26 20:24:50.523528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.997 [2024-11-26 20:24:50.523612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:56.997 pt1 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.997 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.256 malloc2 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.256 [2024-11-26 20:24:50.584678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.256 [2024-11-26 20:24:50.584758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.256 [2024-11-26 20:24:50.584791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:57.256 [2024-11-26 20:24:50.584801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.256 [2024-11-26 20:24:50.587331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.256 [2024-11-26 20:24:50.587369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:57.256 
pt2 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.256 malloc3 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.256 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.256 [2024-11-26 20:24:50.662034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:57.256 [2024-11-26 20:24:50.662171] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.256 [2024-11-26 20:24:50.662230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:57.257 [2024-11-26 20:24:50.662290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.257 [2024-11-26 20:24:50.664832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.257 [2024-11-26 20:24:50.664928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:57.257 pt3 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.257 [2024-11-26 20:24:50.674115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.257 [2024-11-26 20:24:50.676358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:57.257 [2024-11-26 20:24:50.676497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:57.257 [2024-11-26 20:24:50.676753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:57.257 [2024-11-26 20:24:50.676819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:57.257 [2024-11-26 20:24:50.677185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:57.257 [2024-11-26 20:24:50.677450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:57.257 [2024-11-26 20:24:50.677499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:57.257 [2024-11-26 20:24:50.677768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.257 20:24:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.257 "name": "raid_bdev1", 00:11:57.257 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:57.257 "strip_size_kb": 64, 00:11:57.257 "state": "online", 00:11:57.257 "raid_level": "raid0", 00:11:57.257 "superblock": true, 00:11:57.257 "num_base_bdevs": 3, 00:11:57.257 "num_base_bdevs_discovered": 3, 00:11:57.257 "num_base_bdevs_operational": 3, 00:11:57.257 "base_bdevs_list": [ 00:11:57.257 { 00:11:57.257 "name": "pt1", 00:11:57.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.257 "is_configured": true, 00:11:57.257 "data_offset": 2048, 00:11:57.257 "data_size": 63488 00:11:57.257 }, 00:11:57.257 { 00:11:57.257 "name": "pt2", 00:11:57.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.257 "is_configured": true, 00:11:57.257 "data_offset": 2048, 00:11:57.257 "data_size": 63488 00:11:57.257 }, 00:11:57.257 { 00:11:57.257 "name": "pt3", 00:11:57.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.257 "is_configured": true, 00:11:57.257 "data_offset": 2048, 00:11:57.257 "data_size": 63488 00:11:57.257 } 00:11:57.257 ] 00:11:57.257 }' 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.257 20:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.826 [2024-11-26 20:24:51.181574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.826 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.826 "name": "raid_bdev1", 00:11:57.826 "aliases": [ 00:11:57.826 "27b4668c-f036-4477-bac3-f6129c14c3c1" 00:11:57.826 ], 00:11:57.826 "product_name": "Raid Volume", 00:11:57.826 "block_size": 512, 00:11:57.826 "num_blocks": 190464, 00:11:57.826 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:57.826 "assigned_rate_limits": { 00:11:57.826 "rw_ios_per_sec": 0, 00:11:57.826 "rw_mbytes_per_sec": 0, 00:11:57.826 "r_mbytes_per_sec": 0, 00:11:57.826 "w_mbytes_per_sec": 0 00:11:57.826 }, 00:11:57.826 "claimed": false, 00:11:57.826 "zoned": false, 00:11:57.826 "supported_io_types": { 00:11:57.826 "read": true, 00:11:57.826 "write": true, 00:11:57.826 "unmap": true, 00:11:57.826 "flush": true, 00:11:57.826 "reset": true, 00:11:57.826 "nvme_admin": false, 00:11:57.826 "nvme_io": false, 00:11:57.826 "nvme_io_md": false, 00:11:57.826 "write_zeroes": true, 00:11:57.826 "zcopy": false, 00:11:57.826 "get_zone_info": false, 00:11:57.826 "zone_management": false, 00:11:57.826 "zone_append": false, 00:11:57.826 "compare": 
false, 00:11:57.826 "compare_and_write": false, 00:11:57.826 "abort": false, 00:11:57.826 "seek_hole": false, 00:11:57.826 "seek_data": false, 00:11:57.826 "copy": false, 00:11:57.826 "nvme_iov_md": false 00:11:57.826 }, 00:11:57.826 "memory_domains": [ 00:11:57.826 { 00:11:57.826 "dma_device_id": "system", 00:11:57.826 "dma_device_type": 1 00:11:57.826 }, 00:11:57.826 { 00:11:57.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.826 "dma_device_type": 2 00:11:57.826 }, 00:11:57.826 { 00:11:57.826 "dma_device_id": "system", 00:11:57.826 "dma_device_type": 1 00:11:57.826 }, 00:11:57.826 { 00:11:57.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.826 "dma_device_type": 2 00:11:57.826 }, 00:11:57.826 { 00:11:57.826 "dma_device_id": "system", 00:11:57.826 "dma_device_type": 1 00:11:57.826 }, 00:11:57.826 { 00:11:57.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.826 "dma_device_type": 2 00:11:57.826 } 00:11:57.826 ], 00:11:57.826 "driver_specific": { 00:11:57.826 "raid": { 00:11:57.826 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:57.826 "strip_size_kb": 64, 00:11:57.826 "state": "online", 00:11:57.827 "raid_level": "raid0", 00:11:57.827 "superblock": true, 00:11:57.827 "num_base_bdevs": 3, 00:11:57.827 "num_base_bdevs_discovered": 3, 00:11:57.827 "num_base_bdevs_operational": 3, 00:11:57.827 "base_bdevs_list": [ 00:11:57.827 { 00:11:57.827 "name": "pt1", 00:11:57.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.827 "is_configured": true, 00:11:57.827 "data_offset": 2048, 00:11:57.827 "data_size": 63488 00:11:57.827 }, 00:11:57.827 { 00:11:57.827 "name": "pt2", 00:11:57.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.827 "is_configured": true, 00:11:57.827 "data_offset": 2048, 00:11:57.827 "data_size": 63488 00:11:57.827 }, 00:11:57.827 { 00:11:57.827 "name": "pt3", 00:11:57.827 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.827 "is_configured": true, 00:11:57.827 "data_offset": 2048, 00:11:57.827 "data_size": 
63488 00:11:57.827 } 00:11:57.827 ] 00:11:57.827 } 00:11:57.827 } 00:11:57.827 }' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:57.827 pt2 00:11:57.827 pt3' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.827 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:58.086 [2024-11-26 20:24:51.469034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27b4668c-f036-4477-bac3-f6129c14c3c1 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 27b4668c-f036-4477-bac3-f6129c14c3c1 ']' 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 [2024-11-26 20:24:51.500669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.086 [2024-11-26 20:24:51.500770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.086 [2024-11-26 20:24:51.500902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.086 [2024-11-26 20:24:51.501007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.086 [2024-11-26 20:24:51.501061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.086 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 [2024-11-26 20:24:51.656489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:58.346 [2024-11-26 20:24:51.658753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:58.346 [2024-11-26 20:24:51.658866] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:58.346 [2024-11-26 20:24:51.658963] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:58.346 [2024-11-26 20:24:51.659060] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:58.346 [2024-11-26 20:24:51.659085] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:58.346 [2024-11-26 20:24:51.659106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.346 [2024-11-26 20:24:51.659119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:58.346 request: 00:11:58.346 { 00:11:58.346 "name": "raid_bdev1", 00:11:58.346 "raid_level": "raid0", 00:11:58.346 "base_bdevs": [ 00:11:58.346 "malloc1", 00:11:58.346 "malloc2", 00:11:58.346 "malloc3" 00:11:58.346 ], 00:11:58.346 "strip_size_kb": 64, 00:11:58.346 "superblock": false, 00:11:58.346 "method": "bdev_raid_create", 00:11:58.346 "req_id": 1 00:11:58.346 } 00:11:58.346 Got JSON-RPC error response 00:11:58.346 response: 00:11:58.346 { 00:11:58.346 "code": -17, 00:11:58.346 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:58.346 } 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 [2024-11-26 20:24:51.724285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:58.346 [2024-11-26 20:24:51.724390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.346 [2024-11-26 20:24:51.724439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:58.346 [2024-11-26 20:24:51.724478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.346 [2024-11-26 20:24:51.727010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.346 [2024-11-26 20:24:51.727093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:58.346 [2024-11-26 20:24:51.727223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:58.346 [2024-11-26 20:24:51.727337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:58.346 pt1 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.346 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.346 "name": "raid_bdev1", 00:11:58.346 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:58.346 
"strip_size_kb": 64, 00:11:58.346 "state": "configuring", 00:11:58.346 "raid_level": "raid0", 00:11:58.346 "superblock": true, 00:11:58.346 "num_base_bdevs": 3, 00:11:58.346 "num_base_bdevs_discovered": 1, 00:11:58.346 "num_base_bdevs_operational": 3, 00:11:58.346 "base_bdevs_list": [ 00:11:58.346 { 00:11:58.346 "name": "pt1", 00:11:58.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.346 "is_configured": true, 00:11:58.346 "data_offset": 2048, 00:11:58.346 "data_size": 63488 00:11:58.346 }, 00:11:58.346 { 00:11:58.346 "name": null, 00:11:58.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.347 "is_configured": false, 00:11:58.347 "data_offset": 2048, 00:11:58.347 "data_size": 63488 00:11:58.347 }, 00:11:58.347 { 00:11:58.347 "name": null, 00:11:58.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.347 "is_configured": false, 00:11:58.347 "data_offset": 2048, 00:11:58.347 "data_size": 63488 00:11:58.347 } 00:11:58.347 ] 00:11:58.347 }' 00:11:58.347 20:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.347 20:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.936 [2024-11-26 20:24:52.227460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.936 [2024-11-26 20:24:52.227605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.936 [2024-11-26 20:24:52.227670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:58.936 [2024-11-26 20:24:52.227712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.936 [2024-11-26 20:24:52.228274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.936 [2024-11-26 20:24:52.228344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.936 [2024-11-26 20:24:52.228484] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.936 [2024-11-26 20:24:52.228553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.936 pt2 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.936 [2024-11-26 20:24:52.239411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.936 20:24:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.936 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.936 "name": "raid_bdev1", 00:11:58.936 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:58.936 "strip_size_kb": 64, 00:11:58.936 "state": "configuring", 00:11:58.936 "raid_level": "raid0", 00:11:58.936 "superblock": true, 00:11:58.936 "num_base_bdevs": 3, 00:11:58.936 "num_base_bdevs_discovered": 1, 00:11:58.936 "num_base_bdevs_operational": 3, 00:11:58.936 "base_bdevs_list": [ 00:11:58.936 { 00:11:58.936 "name": "pt1", 00:11:58.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.936 "is_configured": true, 00:11:58.936 "data_offset": 2048, 00:11:58.936 "data_size": 63488 00:11:58.936 }, 00:11:58.936 { 00:11:58.936 "name": null, 00:11:58.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.936 "is_configured": false, 00:11:58.937 "data_offset": 0, 00:11:58.937 "data_size": 63488 00:11:58.937 }, 00:11:58.937 { 00:11:58.937 "name": null, 00:11:58.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.937 
"is_configured": false, 00:11:58.937 "data_offset": 2048, 00:11:58.937 "data_size": 63488 00:11:58.937 } 00:11:58.937 ] 00:11:58.937 }' 00:11:58.937 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.937 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.195 [2024-11-26 20:24:52.670716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:59.195 [2024-11-26 20:24:52.670852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.195 [2024-11-26 20:24:52.670902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:59.195 [2024-11-26 20:24:52.670945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.195 [2024-11-26 20:24:52.671535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.195 [2024-11-26 20:24:52.671614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:59.195 [2024-11-26 20:24:52.671747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:59.195 [2024-11-26 20:24:52.671807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:59.195 pt2 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.195 [2024-11-26 20:24:52.682671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:59.195 [2024-11-26 20:24:52.682726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.195 [2024-11-26 20:24:52.682744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:59.195 [2024-11-26 20:24:52.682755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.195 [2024-11-26 20:24:52.683186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.195 [2024-11-26 20:24:52.683233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:59.195 [2024-11-26 20:24:52.683318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:59.195 [2024-11-26 20:24:52.683344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:59.195 [2024-11-26 20:24:52.683480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:59.195 [2024-11-26 20:24:52.683499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:59.195 [2024-11-26 20:24:52.683781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:59.195 [2024-11-26 20:24:52.683940] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:59.195 [2024-11-26 20:24:52.683950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:59.195 [2024-11-26 20:24:52.684131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.195 pt3 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.195 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.196 "name": "raid_bdev1", 00:11:59.196 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:59.196 "strip_size_kb": 64, 00:11:59.196 "state": "online", 00:11:59.196 "raid_level": "raid0", 00:11:59.196 "superblock": true, 00:11:59.196 "num_base_bdevs": 3, 00:11:59.196 "num_base_bdevs_discovered": 3, 00:11:59.196 "num_base_bdevs_operational": 3, 00:11:59.196 "base_bdevs_list": [ 00:11:59.196 { 00:11:59.196 "name": "pt1", 00:11:59.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.196 "is_configured": true, 00:11:59.196 "data_offset": 2048, 00:11:59.196 "data_size": 63488 00:11:59.196 }, 00:11:59.196 { 00:11:59.196 "name": "pt2", 00:11:59.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.196 "is_configured": true, 00:11:59.196 "data_offset": 2048, 00:11:59.196 "data_size": 63488 00:11:59.196 }, 00:11:59.196 { 00:11:59.196 "name": "pt3", 00:11:59.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.196 "is_configured": true, 00:11:59.196 "data_offset": 2048, 00:11:59.196 "data_size": 63488 00:11:59.196 } 00:11:59.196 ] 00:11:59.196 }' 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.196 20:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:59.764 20:24:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.764 [2024-11-26 20:24:53.138369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.764 "name": "raid_bdev1", 00:11:59.764 "aliases": [ 00:11:59.764 "27b4668c-f036-4477-bac3-f6129c14c3c1" 00:11:59.764 ], 00:11:59.764 "product_name": "Raid Volume", 00:11:59.764 "block_size": 512, 00:11:59.764 "num_blocks": 190464, 00:11:59.764 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:59.764 "assigned_rate_limits": { 00:11:59.764 "rw_ios_per_sec": 0, 00:11:59.764 "rw_mbytes_per_sec": 0, 00:11:59.764 "r_mbytes_per_sec": 0, 00:11:59.764 "w_mbytes_per_sec": 0 00:11:59.764 }, 00:11:59.764 "claimed": false, 00:11:59.764 "zoned": false, 00:11:59.764 "supported_io_types": { 00:11:59.764 "read": true, 00:11:59.764 "write": true, 00:11:59.764 "unmap": true, 00:11:59.764 "flush": true, 00:11:59.764 "reset": true, 00:11:59.764 "nvme_admin": false, 00:11:59.764 "nvme_io": false, 00:11:59.764 "nvme_io_md": false, 00:11:59.764 
"write_zeroes": true, 00:11:59.764 "zcopy": false, 00:11:59.764 "get_zone_info": false, 00:11:59.764 "zone_management": false, 00:11:59.764 "zone_append": false, 00:11:59.764 "compare": false, 00:11:59.764 "compare_and_write": false, 00:11:59.764 "abort": false, 00:11:59.764 "seek_hole": false, 00:11:59.764 "seek_data": false, 00:11:59.764 "copy": false, 00:11:59.764 "nvme_iov_md": false 00:11:59.764 }, 00:11:59.764 "memory_domains": [ 00:11:59.764 { 00:11:59.764 "dma_device_id": "system", 00:11:59.764 "dma_device_type": 1 00:11:59.764 }, 00:11:59.764 { 00:11:59.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.764 "dma_device_type": 2 00:11:59.764 }, 00:11:59.764 { 00:11:59.764 "dma_device_id": "system", 00:11:59.764 "dma_device_type": 1 00:11:59.764 }, 00:11:59.764 { 00:11:59.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.764 "dma_device_type": 2 00:11:59.764 }, 00:11:59.764 { 00:11:59.764 "dma_device_id": "system", 00:11:59.764 "dma_device_type": 1 00:11:59.764 }, 00:11:59.764 { 00:11:59.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.764 "dma_device_type": 2 00:11:59.764 } 00:11:59.764 ], 00:11:59.764 "driver_specific": { 00:11:59.764 "raid": { 00:11:59.764 "uuid": "27b4668c-f036-4477-bac3-f6129c14c3c1", 00:11:59.764 "strip_size_kb": 64, 00:11:59.764 "state": "online", 00:11:59.764 "raid_level": "raid0", 00:11:59.764 "superblock": true, 00:11:59.764 "num_base_bdevs": 3, 00:11:59.764 "num_base_bdevs_discovered": 3, 00:11:59.764 "num_base_bdevs_operational": 3, 00:11:59.764 "base_bdevs_list": [ 00:11:59.764 { 00:11:59.764 "name": "pt1", 00:11:59.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.764 "is_configured": true, 00:11:59.764 "data_offset": 2048, 00:11:59.764 "data_size": 63488 00:11:59.764 }, 00:11:59.764 { 00:11:59.764 "name": "pt2", 00:11:59.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.764 "is_configured": true, 00:11:59.764 "data_offset": 2048, 00:11:59.764 "data_size": 63488 00:11:59.764 }, 00:11:59.764 
{ 00:11:59.764 "name": "pt3", 00:11:59.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.764 "is_configured": true, 00:11:59.764 "data_offset": 2048, 00:11:59.764 "data_size": 63488 00:11:59.764 } 00:11:59.764 ] 00:11:59.764 } 00:11:59.764 } 00:11:59.764 }' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:59.764 pt2 00:11:59.764 pt3' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:59.764 20:24:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.764 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.024 
[2024-11-26 20:24:53.413868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 27b4668c-f036-4477-bac3-f6129c14c3c1 '!=' 27b4668c-f036-4477-bac3-f6129c14c3c1 ']' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65361 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65361 ']' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65361 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65361 00:12:00.024 killing process with pid 65361 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65361' 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65361 00:12:00.024 [2024-11-26 20:24:53.498504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.024 [2024-11-26 20:24:53.498618] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.024 [2024-11-26 20:24:53.498685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.024 20:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65361 00:12:00.024 [2024-11-26 20:24:53.498699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:00.592 [2024-11-26 20:24:53.870409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.967 20:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:01.967 00:12:01.967 real 0m5.824s 00:12:01.967 user 0m8.292s 00:12:01.967 sys 0m0.925s 00:12:01.967 20:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.967 ************************************ 00:12:01.967 END TEST raid_superblock_test 00:12:01.967 ************************************ 00:12:01.967 20:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.967 20:24:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:01.967 20:24:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:01.967 20:24:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.967 20:24:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.967 ************************************ 00:12:01.967 START TEST raid_read_error_test 00:12:01.967 ************************************ 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:01.967 20:24:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XjFqTulKYE 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65625 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65625 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65625 ']' 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.967 20:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.967 [2024-11-26 20:24:55.424755] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:12:01.967 [2024-11-26 20:24:55.424984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65625 ] 00:12:02.225 [2024-11-26 20:24:55.605721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.225 [2024-11-26 20:24:55.741258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.483 [2024-11-26 20:24:55.971345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.483 [2024-11-26 20:24:55.971513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 BaseBdev1_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 true 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 [2024-11-26 20:24:56.433019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:03.050 [2024-11-26 20:24:56.433161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.050 [2024-11-26 20:24:56.433196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:03.050 [2024-11-26 20:24:56.433212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.050 [2024-11-26 20:24:56.435852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.050 [2024-11-26 20:24:56.435905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.050 BaseBdev1 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 BaseBdev2_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 true 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 [2024-11-26 20:24:56.508368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:03.050 [2024-11-26 20:24:56.508438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.050 [2024-11-26 20:24:56.508458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:03.050 [2024-11-26 20:24:56.508471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.050 [2024-11-26 20:24:56.510927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.050 [2024-11-26 20:24:56.511029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.050 BaseBdev2 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 BaseBdev3_malloc 00:12:03.050 20:24:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 true 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.050 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.050 [2024-11-26 20:24:56.597573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:03.050 [2024-11-26 20:24:56.597636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.050 [2024-11-26 20:24:56.597658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:03.050 [2024-11-26 20:24:56.597672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.050 [2024-11-26 20:24:56.600131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.050 [2024-11-26 20:24:56.600176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:03.308 BaseBdev3 00:12:03.308 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.308 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:03.308 20:24:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.308 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.308 [2024-11-26 20:24:56.609640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.308 [2024-11-26 20:24:56.611744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.309 [2024-11-26 20:24:56.611829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.309 [2024-11-26 20:24:56.612065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:03.309 [2024-11-26 20:24:56.612082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:03.309 [2024-11-26 20:24:56.612393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:03.309 [2024-11-26 20:24:56.612590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:03.309 [2024-11-26 20:24:56.612606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:03.309 [2024-11-26 20:24:56.612800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.309 20:24:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.309 "name": "raid_bdev1", 00:12:03.309 "uuid": "70fcd370-eb33-4fc6-b46e-273e6475b126", 00:12:03.309 "strip_size_kb": 64, 00:12:03.309 "state": "online", 00:12:03.309 "raid_level": "raid0", 00:12:03.309 "superblock": true, 00:12:03.309 "num_base_bdevs": 3, 00:12:03.309 "num_base_bdevs_discovered": 3, 00:12:03.309 "num_base_bdevs_operational": 3, 00:12:03.309 "base_bdevs_list": [ 00:12:03.309 { 00:12:03.309 "name": "BaseBdev1", 00:12:03.309 "uuid": "a75a99f5-d993-5d9a-a792-6bebadd0f751", 00:12:03.309 "is_configured": true, 00:12:03.309 "data_offset": 2048, 00:12:03.309 "data_size": 63488 00:12:03.309 }, 00:12:03.309 { 00:12:03.309 "name": "BaseBdev2", 00:12:03.309 "uuid": "2a1f379b-c46c-5bf0-9c0f-cf721e2df96c", 00:12:03.309 "is_configured": true, 00:12:03.309 "data_offset": 2048, 00:12:03.309 "data_size": 63488 
00:12:03.309 }, 00:12:03.309 { 00:12:03.309 "name": "BaseBdev3", 00:12:03.309 "uuid": "55c0289c-271c-5f3a-b2a4-38fdb10c6f3d", 00:12:03.309 "is_configured": true, 00:12:03.309 "data_offset": 2048, 00:12:03.309 "data_size": 63488 00:12:03.309 } 00:12:03.309 ] 00:12:03.309 }' 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.309 20:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.568 20:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:03.568 20:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:03.826 [2024-11-26 20:24:57.210328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.761 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.762 "name": "raid_bdev1", 00:12:04.762 "uuid": "70fcd370-eb33-4fc6-b46e-273e6475b126", 00:12:04.762 "strip_size_kb": 64, 00:12:04.762 "state": "online", 00:12:04.762 "raid_level": "raid0", 00:12:04.762 "superblock": true, 00:12:04.762 "num_base_bdevs": 3, 00:12:04.762 "num_base_bdevs_discovered": 3, 00:12:04.762 "num_base_bdevs_operational": 3, 00:12:04.762 "base_bdevs_list": [ 00:12:04.762 { 00:12:04.762 "name": "BaseBdev1", 00:12:04.762 "uuid": "a75a99f5-d993-5d9a-a792-6bebadd0f751", 00:12:04.762 "is_configured": true, 00:12:04.762 "data_offset": 2048, 00:12:04.762 "data_size": 63488 
00:12:04.762 }, 00:12:04.762 { 00:12:04.762 "name": "BaseBdev2", 00:12:04.762 "uuid": "2a1f379b-c46c-5bf0-9c0f-cf721e2df96c", 00:12:04.762 "is_configured": true, 00:12:04.762 "data_offset": 2048, 00:12:04.762 "data_size": 63488 00:12:04.762 }, 00:12:04.762 { 00:12:04.762 "name": "BaseBdev3", 00:12:04.762 "uuid": "55c0289c-271c-5f3a-b2a4-38fdb10c6f3d", 00:12:04.762 "is_configured": true, 00:12:04.762 "data_offset": 2048, 00:12:04.762 "data_size": 63488 00:12:04.762 } 00:12:04.762 ] 00:12:04.762 }' 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.762 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.019 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.019 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 [2024-11-26 20:24:58.503479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.019 [2024-11-26 20:24:58.503599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.019 { 00:12:05.019 "results": [ 00:12:05.019 { 00:12:05.019 "job": "raid_bdev1", 00:12:05.019 "core_mask": "0x1", 00:12:05.020 "workload": "randrw", 00:12:05.020 "percentage": 50, 00:12:05.020 "status": "finished", 00:12:05.020 "queue_depth": 1, 00:12:05.020 "io_size": 131072, 00:12:05.020 "runtime": 1.293552, 00:12:05.020 "iops": 13019.1905698418, 00:12:05.020 "mibps": 1627.398821230225, 00:12:05.020 "io_failed": 1, 00:12:05.020 "io_timeout": 0, 00:12:05.020 "avg_latency_us": 106.1766129488091, 00:12:05.020 "min_latency_us": 26.829694323144103, 00:12:05.020 "max_latency_us": 1645.5545851528384 00:12:05.020 } 00:12:05.020 ], 00:12:05.020 "core_count": 1 00:12:05.020 } 00:12:05.020 [2024-11-26 
20:24:58.508716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.020 [2024-11-26 20:24:58.508846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.020 [2024-11-26 20:24:58.508913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.020 [2024-11-26 20:24:58.508929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65625 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65625 ']' 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65625 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65625 00:12:05.020 killing process with pid 65625 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65625' 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65625 00:12:05.020 20:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65625 00:12:05.020 [2024-11-26 20:24:58.545364] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.585 [2024-11-26 
20:24:58.922083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XjFqTulKYE 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.957 ************************************ 00:12:06.957 END TEST raid_read_error_test 00:12:06.957 ************************************ 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:12:06.957 00:12:06.957 real 0m4.850s 00:12:06.957 user 0m5.785s 00:12:06.957 sys 0m0.573s 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.957 20:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.957 20:25:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:06.957 20:25:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:06.957 20:25:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.957 20:25:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.957 ************************************ 00:12:06.957 START TEST raid_write_error_test 00:12:06.957 ************************************ 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:12:06.957 20:25:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:06.957 20:25:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qTAU9vk68W 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65767 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65767 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65767 ']' 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.957 20:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.957 [2024-11-26 20:25:00.325618] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:12:06.957 [2024-11-26 20:25:00.325824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65767 ] 00:12:06.957 [2024-11-26 20:25:00.499968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.218 [2024-11-26 20:25:00.619197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.480 [2024-11-26 20:25:00.830421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.480 [2024-11-26 20:25:00.830576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.739 BaseBdev1_malloc 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.739 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.739 true 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.740 [2024-11-26 20:25:01.218733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:07.740 [2024-11-26 20:25:01.218843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.740 [2024-11-26 20:25:01.218886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:07.740 [2024-11-26 20:25:01.218917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.740 [2024-11-26 20:25:01.221168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.740 [2024-11-26 20:25:01.221264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:07.740 BaseBdev1 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.740 BaseBdev2_malloc 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.740 true 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.740 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.740 [2024-11-26 20:25:01.289351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:07.740 [2024-11-26 20:25:01.289480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.740 [2024-11-26 20:25:01.289545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:07.740 [2024-11-26 20:25:01.289593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.740 [2024-11-26 20:25:01.291920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.740 [2024-11-26 20:25:01.292010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:07.999 BaseBdev2 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:07.999 20:25:01 
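The trace above repeats a three-step stack for each base device: a malloc bdev (`bdev_malloc_create`), an error bdev wrapped around it (`bdev_error_create`), and a passthru bdev on top (`bdev_passthru_create -b EE_... -p ...`). A minimal sketch of just the naming convention visible in the log; this models only the names, not SPDK itself:

```python
# Naming pattern inferred from the trace above:
#   bdev_malloc_create 32 512 -b BaseBdevN_malloc
#   bdev_error_create BaseBdevN_malloc          -> creates EE_BaseBdevN_malloc
#   bdev_passthru_create -b EE_BaseBdevN_malloc -p BaseBdevN
def bdev_stack(index: int) -> list[str]:
    malloc = f"BaseBdev{index}_malloc"
    error = f"EE_{malloc}"          # bdev_error_create prefixes "EE_"
    passthru = f"BaseBdev{index}"   # name the raid is later built from
    return [malloc, error, passthru]

print(bdev_stack(1))  # ['BaseBdev1_malloc', 'EE_BaseBdev1_malloc', 'BaseBdev1']
```

The error bdev in the middle is what `bdev_error_inject_error` later targets to force write failures.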
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.999 BaseBdev3_malloc 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.999 true 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.999 [2024-11-26 20:25:01.367298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:07.999 [2024-11-26 20:25:01.367350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.999 [2024-11-26 20:25:01.367368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:07.999 [2024-11-26 20:25:01.367378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.999 [2024-11-26 20:25:01.369571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.999 [2024-11-26 20:25:01.369663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:07.999 BaseBdev3 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.999 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.999 [2024-11-26 20:25:01.379378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.999 [2024-11-26 20:25:01.381355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.999 [2024-11-26 20:25:01.381435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.999 [2024-11-26 20:25:01.381658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:07.999 [2024-11-26 20:25:01.381680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:07.999 [2024-11-26 20:25:01.381972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:08.000 [2024-11-26 20:25:01.382143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:08.000 [2024-11-26 20:25:01.382157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:08.000 [2024-11-26 20:25:01.382320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
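The configure step above reports `blockcnt 190464, blocklen 512` for the assembled raid0. That figure is consistent with three base bdevs each contributing 63488 data blocks (the `data_size` values reported later in the trace); a quick cross-check:

```python
# Cross-check of the raid0 capacity reported in the trace:
# "blockcnt 190464, blocklen 512" with 3 base bdevs of 63488 data blocks each.
num_base_bdevs = 3
data_blocks_per_bdev = 63488   # data_size from bdev_raid_get_bdevs
blockcnt = num_base_bdevs * data_blocks_per_bdev
print(blockcnt)  # 190464
```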
-- # local raid_bdev_name=raid_bdev1 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.000 "name": "raid_bdev1", 00:12:08.000 "uuid": "114c8761-2c77-4469-8a99-941927e8b94c", 00:12:08.000 "strip_size_kb": 64, 00:12:08.000 "state": "online", 00:12:08.000 "raid_level": "raid0", 00:12:08.000 "superblock": true, 00:12:08.000 "num_base_bdevs": 3, 00:12:08.000 "num_base_bdevs_discovered": 3, 00:12:08.000 "num_base_bdevs_operational": 3, 00:12:08.000 "base_bdevs_list": [ 00:12:08.000 { 00:12:08.000 "name": "BaseBdev1", 
00:12:08.000 "uuid": "52647ea6-b30d-5d85-ba8a-a7a9960ea744", 00:12:08.000 "is_configured": true, 00:12:08.000 "data_offset": 2048, 00:12:08.000 "data_size": 63488 00:12:08.000 }, 00:12:08.000 { 00:12:08.000 "name": "BaseBdev2", 00:12:08.000 "uuid": "86d98c9e-fa46-55a4-b6e0-c2461c32825e", 00:12:08.000 "is_configured": true, 00:12:08.000 "data_offset": 2048, 00:12:08.000 "data_size": 63488 00:12:08.000 }, 00:12:08.000 { 00:12:08.000 "name": "BaseBdev3", 00:12:08.000 "uuid": "4aaab2c5-b2cc-5258-a890-855162af0d80", 00:12:08.000 "is_configured": true, 00:12:08.000 "data_offset": 2048, 00:12:08.000 "data_size": 63488 00:12:08.000 } 00:12:08.000 ] 00:12:08.000 }' 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.000 20:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.258 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:08.259 20:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:08.518 [2024-11-26 20:25:01.863736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- 
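The `verify_raid_bdev_state` helper checks this JSON with `jq` plus shell comparisons. An equivalent hedged sketch in python, using only the fields from the dump above that the helper actually compares:

```python
import json

# Abbreviated bdev_raid_get_bdevs output captured in the trace above.
raid_bdev_info = json.loads("""{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}""")

# Same checks the shell helper performs (expected_state, raid_level,
# strip_size, operational/discovered counts).
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid0"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_discovered"] == 3
assert raid_bdev_info["num_base_bdevs_operational"] == 3
```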
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.452 "name": "raid_bdev1", 00:12:09.452 "uuid": "114c8761-2c77-4469-8a99-941927e8b94c", 00:12:09.452 "strip_size_kb": 64, 00:12:09.452 "state": "online", 00:12:09.452 
"raid_level": "raid0", 00:12:09.452 "superblock": true, 00:12:09.452 "num_base_bdevs": 3, 00:12:09.452 "num_base_bdevs_discovered": 3, 00:12:09.452 "num_base_bdevs_operational": 3, 00:12:09.452 "base_bdevs_list": [ 00:12:09.452 { 00:12:09.452 "name": "BaseBdev1", 00:12:09.452 "uuid": "52647ea6-b30d-5d85-ba8a-a7a9960ea744", 00:12:09.452 "is_configured": true, 00:12:09.452 "data_offset": 2048, 00:12:09.452 "data_size": 63488 00:12:09.452 }, 00:12:09.452 { 00:12:09.452 "name": "BaseBdev2", 00:12:09.452 "uuid": "86d98c9e-fa46-55a4-b6e0-c2461c32825e", 00:12:09.452 "is_configured": true, 00:12:09.452 "data_offset": 2048, 00:12:09.452 "data_size": 63488 00:12:09.452 }, 00:12:09.452 { 00:12:09.452 "name": "BaseBdev3", 00:12:09.452 "uuid": "4aaab2c5-b2cc-5258-a890-855162af0d80", 00:12:09.452 "is_configured": true, 00:12:09.452 "data_offset": 2048, 00:12:09.452 "data_size": 63488 00:12:09.452 } 00:12:09.452 ] 00:12:09.452 }' 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.452 20:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.711 [2024-11-26 20:25:03.231929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.711 [2024-11-26 20:25:03.232011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.711 [2024-11-26 20:25:03.234915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.711 [2024-11-26 20:25:03.235002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.711 [2024-11-26 20:25:03.235087] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.711 [2024-11-26 20:25:03.235136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65767 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65767 ']' 00:12:09.711 { 00:12:09.711 "results": [ 00:12:09.711 { 00:12:09.711 "job": "raid_bdev1", 00:12:09.711 "core_mask": "0x1", 00:12:09.711 "workload": "randrw", 00:12:09.711 "percentage": 50, 00:12:09.711 "status": "finished", 00:12:09.711 "queue_depth": 1, 00:12:09.711 "io_size": 131072, 00:12:09.711 "runtime": 1.36899, 00:12:09.711 "iops": 15411.361660786419, 00:12:09.711 "mibps": 1926.4202075983023, 00:12:09.711 "io_failed": 1, 00:12:09.711 "io_timeout": 0, 00:12:09.711 "avg_latency_us": 90.02666746142278, 00:12:09.711 "min_latency_us": 19.339737991266375, 00:12:09.711 "max_latency_us": 1552.5449781659388 00:12:09.711 } 00:12:09.711 ], 00:12:09.711 "core_count": 1 00:12:09.711 } 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65767 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.711 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65767 00:12:09.970 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.970 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.970 20:25:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65767' 00:12:09.970 killing process with pid 65767 00:12:09.970 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65767 00:12:09.970 [2024-11-26 20:25:03.269379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.970 20:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65767 00:12:09.970 [2024-11-26 20:25:03.509850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qTAU9vk68W 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:11.345 ************************************ 00:12:11.345 END TEST raid_write_error_test 00:12:11.345 ************************************ 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:11.345 00:12:11.345 real 0m4.527s 00:12:11.345 user 0m5.321s 00:12:11.345 sys 0m0.530s 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.345 20:25:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.345 20:25:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:11.345 20:25:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
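The `grep raid_bdev1 | grep -v Job | awk '{print $6}'` pipeline later extracts `fail_per_s=0.73` from the bdevperf log. That value follows directly from the `"results"` JSON in the trace above (`io_failed: 1`, `runtime: 1.36899`):

```python
# Reproduce the fail_per_s figure the test extracts from the bdevperf log,
# using the "results" JSON emitted earlier in the trace.
io_failed = 1
runtime_s = 1.36899
fail_per_s = io_failed / runtime_s
print(round(fail_per_s, 2))  # 0.73, matching fail_per_s=0.73 in the log
```

Since raid0 has no redundancy (`has_redundancy raid0` returns 1), the test only requires that this value be non-zero, i.e. the injected write error actually surfaced.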
raid_state_function_test concat 3 false 00:12:11.345 20:25:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.345 20:25:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.345 20:25:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.345 ************************************ 00:12:11.345 START TEST raid_state_function_test 00:12:11.345 ************************************ 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.345 20:25:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65909 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65909' 00:12:11.345 Process raid pid: 65909 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65909 00:12:11.345 20:25:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65909 ']' 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.345 20:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.603 [2024-11-26 20:25:04.930203] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:12:11.603 [2024-11-26 20:25:04.930585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.603 [2024-11-26 20:25:05.114214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.861 [2024-11-26 20:25:05.239284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.119 [2024-11-26 20:25:05.455770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.119 [2024-11-26 20:25:05.455890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.379 [2024-11-26 20:25:05.786968] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.379 [2024-11-26 20:25:05.787026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.379 [2024-11-26 20:25:05.787037] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.379 [2024-11-26 20:25:05.787063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.379 [2024-11-26 20:25:05.787070] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.379 [2024-11-26 20:25:05.787079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.379 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.380 "name": "Existed_Raid", 00:12:12.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.380 "strip_size_kb": 64, 00:12:12.380 "state": "configuring", 00:12:12.380 "raid_level": "concat", 00:12:12.380 "superblock": false, 00:12:12.380 "num_base_bdevs": 3, 00:12:12.380 "num_base_bdevs_discovered": 0, 00:12:12.380 "num_base_bdevs_operational": 3, 00:12:12.380 "base_bdevs_list": [ 00:12:12.380 { 00:12:12.380 "name": "BaseBdev1", 00:12:12.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.380 "is_configured": false, 00:12:12.380 "data_offset": 0, 00:12:12.380 "data_size": 0 00:12:12.380 }, 00:12:12.380 { 00:12:12.380 "name": "BaseBdev2", 00:12:12.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.380 "is_configured": false, 00:12:12.380 "data_offset": 0, 00:12:12.380 "data_size": 0 00:12:12.380 }, 00:12:12.380 { 00:12:12.380 "name": "BaseBdev3", 00:12:12.380 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:12.380 "is_configured": false, 00:12:12.380 "data_offset": 0, 00:12:12.380 "data_size": 0 00:12:12.380 } 00:12:12.380 ] 00:12:12.380 }' 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.380 20:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.948 [2024-11-26 20:25:06.214180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.948 [2024-11-26 20:25:06.214279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.948 [2024-11-26 20:25:06.226172] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.948 [2024-11-26 20:25:06.226262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.948 [2024-11-26 20:25:06.226295] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.948 [2024-11-26 20:25:06.226319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
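Before any base bdev exists, `bdev_raid_create` leaves the raid in `configuring` with zero discovered members and all-zero UUIDs, as the dump above shows. A hedged python mirror of the state check, using only fields from that dump:

```python
import json

# Abbreviated Existed_Raid state from the bdev_raid_get_bdevs dump above,
# taken before any BaseBdev has been created.
existed_raid = json.loads("""{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false}
  ]
}""")

assert existed_raid["state"] == "configuring"
assert existed_raid["num_base_bdevs_discovered"] == 0
assert not any(b["is_configured"] for b in existed_raid["base_bdevs_list"])
```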
BaseBdev2 doesn't exist now 00:12:12.948 [2024-11-26 20:25:06.226401] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.948 [2024-11-26 20:25:06.226428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.948 [2024-11-26 20:25:06.274242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.948 BaseBdev1 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.948 [ 00:12:12.948 { 00:12:12.948 "name": "BaseBdev1", 00:12:12.948 "aliases": [ 00:12:12.948 "3b7b4c42-2e14-4791-a616-38f34c4e4c44" 00:12:12.948 ], 00:12:12.948 "product_name": "Malloc disk", 00:12:12.948 "block_size": 512, 00:12:12.948 "num_blocks": 65536, 00:12:12.948 "uuid": "3b7b4c42-2e14-4791-a616-38f34c4e4c44", 00:12:12.948 "assigned_rate_limits": { 00:12:12.948 "rw_ios_per_sec": 0, 00:12:12.948 "rw_mbytes_per_sec": 0, 00:12:12.948 "r_mbytes_per_sec": 0, 00:12:12.948 "w_mbytes_per_sec": 0 00:12:12.948 }, 00:12:12.948 "claimed": true, 00:12:12.948 "claim_type": "exclusive_write", 00:12:12.948 "zoned": false, 00:12:12.948 "supported_io_types": { 00:12:12.948 "read": true, 00:12:12.948 "write": true, 00:12:12.948 "unmap": true, 00:12:12.948 "flush": true, 00:12:12.948 "reset": true, 00:12:12.948 "nvme_admin": false, 00:12:12.948 "nvme_io": false, 00:12:12.948 "nvme_io_md": false, 00:12:12.948 "write_zeroes": true, 00:12:12.948 "zcopy": true, 00:12:12.948 "get_zone_info": false, 00:12:12.948 "zone_management": false, 00:12:12.948 "zone_append": false, 00:12:12.948 "compare": false, 00:12:12.948 "compare_and_write": false, 00:12:12.948 "abort": true, 00:12:12.948 "seek_hole": false, 00:12:12.948 "seek_data": false, 00:12:12.948 "copy": true, 00:12:12.948 "nvme_iov_md": false 00:12:12.948 }, 00:12:12.948 "memory_domains": [ 00:12:12.948 { 00:12:12.948 "dma_device_id": "system", 00:12:12.948 "dma_device_type": 1 00:12:12.948 }, 00:12:12.948 { 00:12:12.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:12.948 "dma_device_type": 2 00:12:12.948 } 00:12:12.948 ], 00:12:12.948 "driver_specific": {} 00:12:12.948 } 00:12:12.948 ] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.948 20:25:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.948 "name": "Existed_Raid", 00:12:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.948 "strip_size_kb": 64, 00:12:12.948 "state": "configuring", 00:12:12.948 "raid_level": "concat", 00:12:12.948 "superblock": false, 00:12:12.948 "num_base_bdevs": 3, 00:12:12.948 "num_base_bdevs_discovered": 1, 00:12:12.948 "num_base_bdevs_operational": 3, 00:12:12.948 "base_bdevs_list": [ 00:12:12.948 { 00:12:12.948 "name": "BaseBdev1", 00:12:12.948 "uuid": "3b7b4c42-2e14-4791-a616-38f34c4e4c44", 00:12:12.948 "is_configured": true, 00:12:12.948 "data_offset": 0, 00:12:12.948 "data_size": 65536 00:12:12.948 }, 00:12:12.948 { 00:12:12.948 "name": "BaseBdev2", 00:12:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.948 "is_configured": false, 00:12:12.948 "data_offset": 0, 00:12:12.948 "data_size": 0 00:12:12.948 }, 00:12:12.948 { 00:12:12.948 "name": "BaseBdev3", 00:12:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.948 "is_configured": false, 00:12:12.948 "data_offset": 0, 00:12:12.948 "data_size": 0 00:12:12.948 } 00:12:12.948 ] 00:12:12.948 }' 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.948 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.209 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.209 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.209 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.468 [2024-11-26 20:25:06.761504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.468 [2024-11-26 20:25:06.761565] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.468 [2024-11-26 20:25:06.773529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.468 [2024-11-26 20:25:06.775622] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.468 [2024-11-26 20:25:06.775719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.468 [2024-11-26 20:25:06.775754] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.468 [2024-11-26 20:25:06.775781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.468 20:25:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.468 "name": "Existed_Raid", 00:12:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.468 "strip_size_kb": 64, 00:12:13.468 "state": "configuring", 00:12:13.468 "raid_level": "concat", 00:12:13.468 "superblock": false, 00:12:13.468 "num_base_bdevs": 3, 00:12:13.468 "num_base_bdevs_discovered": 1, 00:12:13.468 "num_base_bdevs_operational": 3, 00:12:13.468 "base_bdevs_list": [ 00:12:13.468 { 00:12:13.468 "name": "BaseBdev1", 00:12:13.468 "uuid": "3b7b4c42-2e14-4791-a616-38f34c4e4c44", 00:12:13.468 "is_configured": true, 00:12:13.468 "data_offset": 
0, 00:12:13.468 "data_size": 65536 00:12:13.468 }, 00:12:13.468 { 00:12:13.468 "name": "BaseBdev2", 00:12:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.468 "is_configured": false, 00:12:13.468 "data_offset": 0, 00:12:13.468 "data_size": 0 00:12:13.468 }, 00:12:13.468 { 00:12:13.468 "name": "BaseBdev3", 00:12:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.468 "is_configured": false, 00:12:13.468 "data_offset": 0, 00:12:13.468 "data_size": 0 00:12:13.468 } 00:12:13.468 ] 00:12:13.468 }' 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.468 20:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.728 [2024-11-26 20:25:07.256848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.728 BaseBdev2 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.728 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.987 [ 00:12:13.987 { 00:12:13.987 "name": "BaseBdev2", 00:12:13.987 "aliases": [ 00:12:13.987 "9c40e515-68f5-483a-b059-0fa3eb5917ce" 00:12:13.987 ], 00:12:13.987 "product_name": "Malloc disk", 00:12:13.987 "block_size": 512, 00:12:13.987 "num_blocks": 65536, 00:12:13.987 "uuid": "9c40e515-68f5-483a-b059-0fa3eb5917ce", 00:12:13.987 "assigned_rate_limits": { 00:12:13.987 "rw_ios_per_sec": 0, 00:12:13.987 "rw_mbytes_per_sec": 0, 00:12:13.987 "r_mbytes_per_sec": 0, 00:12:13.987 "w_mbytes_per_sec": 0 00:12:13.987 }, 00:12:13.987 "claimed": true, 00:12:13.987 "claim_type": "exclusive_write", 00:12:13.987 "zoned": false, 00:12:13.987 "supported_io_types": { 00:12:13.987 "read": true, 00:12:13.987 "write": true, 00:12:13.987 "unmap": true, 00:12:13.987 "flush": true, 00:12:13.987 "reset": true, 00:12:13.987 "nvme_admin": false, 00:12:13.987 "nvme_io": false, 00:12:13.987 "nvme_io_md": false, 00:12:13.987 "write_zeroes": true, 00:12:13.987 "zcopy": true, 00:12:13.987 "get_zone_info": false, 00:12:13.987 "zone_management": false, 00:12:13.987 "zone_append": false, 00:12:13.987 "compare": false, 00:12:13.987 "compare_and_write": false, 00:12:13.987 "abort": true, 00:12:13.987 "seek_hole": 
false, 00:12:13.987 "seek_data": false, 00:12:13.987 "copy": true, 00:12:13.987 "nvme_iov_md": false 00:12:13.987 }, 00:12:13.987 "memory_domains": [ 00:12:13.987 { 00:12:13.987 "dma_device_id": "system", 00:12:13.987 "dma_device_type": 1 00:12:13.987 }, 00:12:13.987 { 00:12:13.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.987 "dma_device_type": 2 00:12:13.987 } 00:12:13.987 ], 00:12:13.987 "driver_specific": {} 00:12:13.987 } 00:12:13.987 ] 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.987 "name": "Existed_Raid", 00:12:13.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.987 "strip_size_kb": 64, 00:12:13.987 "state": "configuring", 00:12:13.987 "raid_level": "concat", 00:12:13.987 "superblock": false, 00:12:13.987 "num_base_bdevs": 3, 00:12:13.987 "num_base_bdevs_discovered": 2, 00:12:13.987 "num_base_bdevs_operational": 3, 00:12:13.987 "base_bdevs_list": [ 00:12:13.987 { 00:12:13.987 "name": "BaseBdev1", 00:12:13.987 "uuid": "3b7b4c42-2e14-4791-a616-38f34c4e4c44", 00:12:13.987 "is_configured": true, 00:12:13.987 "data_offset": 0, 00:12:13.987 "data_size": 65536 00:12:13.987 }, 00:12:13.987 { 00:12:13.987 "name": "BaseBdev2", 00:12:13.987 "uuid": "9c40e515-68f5-483a-b059-0fa3eb5917ce", 00:12:13.987 "is_configured": true, 00:12:13.987 "data_offset": 0, 00:12:13.987 "data_size": 65536 00:12:13.987 }, 00:12:13.987 { 00:12:13.987 "name": "BaseBdev3", 00:12:13.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.987 "is_configured": false, 00:12:13.987 "data_offset": 0, 00:12:13.987 "data_size": 0 00:12:13.987 } 00:12:13.987 ] 00:12:13.987 }' 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.987 20:25:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.248 [2024-11-26 20:25:07.755553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.248 [2024-11-26 20:25:07.755693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:14.248 [2024-11-26 20:25:07.755724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:14.248 [2024-11-26 20:25:07.756039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:14.248 [2024-11-26 20:25:07.756276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:14.248 [2024-11-26 20:25:07.756322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:14.248 [2024-11-26 20:25:07.756627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.248 BaseBdev3 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.248 20:25:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.248 [ 00:12:14.248 { 00:12:14.248 "name": "BaseBdev3", 00:12:14.248 "aliases": [ 00:12:14.248 "91d90663-13ed-45e2-bfee-47f03cd901b6" 00:12:14.248 ], 00:12:14.248 "product_name": "Malloc disk", 00:12:14.248 "block_size": 512, 00:12:14.248 "num_blocks": 65536, 00:12:14.248 "uuid": "91d90663-13ed-45e2-bfee-47f03cd901b6", 00:12:14.248 "assigned_rate_limits": { 00:12:14.248 "rw_ios_per_sec": 0, 00:12:14.248 "rw_mbytes_per_sec": 0, 00:12:14.248 "r_mbytes_per_sec": 0, 00:12:14.248 "w_mbytes_per_sec": 0 00:12:14.248 }, 00:12:14.248 "claimed": true, 00:12:14.248 "claim_type": "exclusive_write", 00:12:14.248 "zoned": false, 00:12:14.248 "supported_io_types": { 00:12:14.248 "read": true, 00:12:14.248 "write": true, 00:12:14.248 "unmap": true, 00:12:14.248 "flush": true, 00:12:14.248 "reset": true, 00:12:14.248 "nvme_admin": false, 00:12:14.248 "nvme_io": false, 00:12:14.248 "nvme_io_md": false, 00:12:14.248 "write_zeroes": true, 00:12:14.248 "zcopy": true, 00:12:14.248 "get_zone_info": false, 00:12:14.248 "zone_management": false, 00:12:14.248 "zone_append": false, 00:12:14.248 "compare": false, 
00:12:14.248 "compare_and_write": false, 00:12:14.248 "abort": true, 00:12:14.248 "seek_hole": false, 00:12:14.248 "seek_data": false, 00:12:14.248 "copy": true, 00:12:14.248 "nvme_iov_md": false 00:12:14.248 }, 00:12:14.248 "memory_domains": [ 00:12:14.248 { 00:12:14.248 "dma_device_id": "system", 00:12:14.248 "dma_device_type": 1 00:12:14.248 }, 00:12:14.248 { 00:12:14.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.248 "dma_device_type": 2 00:12:14.248 } 00:12:14.248 ], 00:12:14.248 "driver_specific": {} 00:12:14.248 } 00:12:14.248 ] 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.248 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.507 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.507 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.507 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.507 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.507 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.507 "name": "Existed_Raid", 00:12:14.507 "uuid": "721a8d1e-aa82-4998-962f-480142e2b24a", 00:12:14.507 "strip_size_kb": 64, 00:12:14.507 "state": "online", 00:12:14.507 "raid_level": "concat", 00:12:14.507 "superblock": false, 00:12:14.507 "num_base_bdevs": 3, 00:12:14.507 "num_base_bdevs_discovered": 3, 00:12:14.507 "num_base_bdevs_operational": 3, 00:12:14.507 "base_bdevs_list": [ 00:12:14.507 { 00:12:14.507 "name": "BaseBdev1", 00:12:14.507 "uuid": "3b7b4c42-2e14-4791-a616-38f34c4e4c44", 00:12:14.507 "is_configured": true, 00:12:14.507 "data_offset": 0, 00:12:14.507 "data_size": 65536 00:12:14.507 }, 00:12:14.507 { 00:12:14.507 "name": "BaseBdev2", 00:12:14.507 "uuid": "9c40e515-68f5-483a-b059-0fa3eb5917ce", 00:12:14.507 "is_configured": true, 00:12:14.507 "data_offset": 0, 00:12:14.507 "data_size": 65536 00:12:14.507 }, 00:12:14.507 { 00:12:14.507 "name": "BaseBdev3", 00:12:14.507 "uuid": "91d90663-13ed-45e2-bfee-47f03cd901b6", 00:12:14.508 "is_configured": true, 00:12:14.508 "data_offset": 0, 00:12:14.508 "data_size": 65536 00:12:14.508 } 00:12:14.508 ] 00:12:14.508 }' 00:12:14.508 20:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:14.508 20:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.790 [2024-11-26 20:25:08.271075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.790 "name": "Existed_Raid", 00:12:14.790 "aliases": [ 00:12:14.790 "721a8d1e-aa82-4998-962f-480142e2b24a" 00:12:14.790 ], 00:12:14.790 "product_name": "Raid Volume", 00:12:14.790 "block_size": 512, 00:12:14.790 "num_blocks": 196608, 00:12:14.790 "uuid": "721a8d1e-aa82-4998-962f-480142e2b24a", 00:12:14.790 "assigned_rate_limits": { 00:12:14.790 "rw_ios_per_sec": 0, 00:12:14.790 "rw_mbytes_per_sec": 0, 00:12:14.790 "r_mbytes_per_sec": 
0, 00:12:14.790 "w_mbytes_per_sec": 0 00:12:14.790 }, 00:12:14.790 "claimed": false, 00:12:14.790 "zoned": false, 00:12:14.790 "supported_io_types": { 00:12:14.790 "read": true, 00:12:14.790 "write": true, 00:12:14.790 "unmap": true, 00:12:14.790 "flush": true, 00:12:14.790 "reset": true, 00:12:14.790 "nvme_admin": false, 00:12:14.790 "nvme_io": false, 00:12:14.790 "nvme_io_md": false, 00:12:14.790 "write_zeroes": true, 00:12:14.790 "zcopy": false, 00:12:14.790 "get_zone_info": false, 00:12:14.790 "zone_management": false, 00:12:14.790 "zone_append": false, 00:12:14.790 "compare": false, 00:12:14.790 "compare_and_write": false, 00:12:14.790 "abort": false, 00:12:14.790 "seek_hole": false, 00:12:14.790 "seek_data": false, 00:12:14.790 "copy": false, 00:12:14.790 "nvme_iov_md": false 00:12:14.790 }, 00:12:14.790 "memory_domains": [ 00:12:14.790 { 00:12:14.790 "dma_device_id": "system", 00:12:14.790 "dma_device_type": 1 00:12:14.790 }, 00:12:14.790 { 00:12:14.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.790 "dma_device_type": 2 00:12:14.790 }, 00:12:14.790 { 00:12:14.790 "dma_device_id": "system", 00:12:14.790 "dma_device_type": 1 00:12:14.790 }, 00:12:14.790 { 00:12:14.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.790 "dma_device_type": 2 00:12:14.790 }, 00:12:14.790 { 00:12:14.790 "dma_device_id": "system", 00:12:14.790 "dma_device_type": 1 00:12:14.790 }, 00:12:14.790 { 00:12:14.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.790 "dma_device_type": 2 00:12:14.790 } 00:12:14.790 ], 00:12:14.790 "driver_specific": { 00:12:14.790 "raid": { 00:12:14.790 "uuid": "721a8d1e-aa82-4998-962f-480142e2b24a", 00:12:14.790 "strip_size_kb": 64, 00:12:14.790 "state": "online", 00:12:14.790 "raid_level": "concat", 00:12:14.790 "superblock": false, 00:12:14.790 "num_base_bdevs": 3, 00:12:14.790 "num_base_bdevs_discovered": 3, 00:12:14.790 "num_base_bdevs_operational": 3, 00:12:14.790 "base_bdevs_list": [ 00:12:14.790 { 00:12:14.790 "name": "BaseBdev1", 
00:12:14.790 "uuid": "3b7b4c42-2e14-4791-a616-38f34c4e4c44", 00:12:14.790 "is_configured": true, 00:12:14.790 "data_offset": 0, 00:12:14.790 "data_size": 65536 00:12:14.790 }, 00:12:14.790 { 00:12:14.790 "name": "BaseBdev2", 00:12:14.790 "uuid": "9c40e515-68f5-483a-b059-0fa3eb5917ce", 00:12:14.790 "is_configured": true, 00:12:14.790 "data_offset": 0, 00:12:14.790 "data_size": 65536 00:12:14.790 }, 00:12:14.790 { 00:12:14.790 "name": "BaseBdev3", 00:12:14.790 "uuid": "91d90663-13ed-45e2-bfee-47f03cd901b6", 00:12:14.790 "is_configured": true, 00:12:14.790 "data_offset": 0, 00:12:14.790 "data_size": 65536 00:12:14.790 } 00:12:14.790 ] 00:12:14.790 } 00:12:14.790 } 00:12:14.790 }' 00:12:14.790 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:15.049 BaseBdev2 00:12:15.049 BaseBdev3' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 [2024-11-26 20:25:08.542342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.049 [2024-11-26 20:25:08.542370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.049 [2024-11-26 20:25:08.542421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.308 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.308 "name": "Existed_Raid", 00:12:15.308 "uuid": "721a8d1e-aa82-4998-962f-480142e2b24a", 00:12:15.308 "strip_size_kb": 64, 00:12:15.308 "state": "offline", 00:12:15.308 "raid_level": "concat", 00:12:15.308 "superblock": false, 00:12:15.308 "num_base_bdevs": 3, 00:12:15.308 "num_base_bdevs_discovered": 2, 00:12:15.308 "num_base_bdevs_operational": 2, 00:12:15.308 "base_bdevs_list": [ 00:12:15.308 { 00:12:15.308 "name": null, 00:12:15.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.308 "is_configured": false, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 65536 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "name": "BaseBdev2", 00:12:15.308 "uuid": 
"9c40e515-68f5-483a-b059-0fa3eb5917ce", 00:12:15.308 "is_configured": true, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 65536 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "name": "BaseBdev3", 00:12:15.308 "uuid": "91d90663-13ed-45e2-bfee-47f03cd901b6", 00:12:15.308 "is_configured": true, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 65536 00:12:15.308 } 00:12:15.308 ] 00:12:15.308 }' 00:12:15.309 20:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.309 20:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.569 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:15.569 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.569 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.569 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.569 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.569 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.569 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.832 [2024-11-26 20:25:09.132921] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.832 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.833 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:15.833 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.833 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.833 [2024-11-26 20:25:09.288203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.833 [2024-11-26 20:25:09.288322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.093 20:25:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.093 BaseBdev2 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.093 
20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.093 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 [ 00:12:16.094 { 00:12:16.094 "name": "BaseBdev2", 00:12:16.094 "aliases": [ 00:12:16.094 "d001ea37-3fa7-4d80-8bfa-442f7458e8df" 00:12:16.094 ], 00:12:16.094 "product_name": "Malloc disk", 00:12:16.094 "block_size": 512, 00:12:16.094 "num_blocks": 65536, 00:12:16.094 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:16.094 "assigned_rate_limits": { 00:12:16.094 "rw_ios_per_sec": 0, 00:12:16.094 "rw_mbytes_per_sec": 0, 00:12:16.094 "r_mbytes_per_sec": 0, 00:12:16.094 "w_mbytes_per_sec": 0 00:12:16.094 }, 00:12:16.094 "claimed": false, 00:12:16.094 "zoned": false, 00:12:16.094 "supported_io_types": { 00:12:16.094 "read": true, 00:12:16.094 "write": true, 00:12:16.094 "unmap": true, 00:12:16.094 "flush": true, 00:12:16.094 "reset": true, 00:12:16.094 "nvme_admin": false, 00:12:16.094 "nvme_io": false, 00:12:16.094 "nvme_io_md": false, 00:12:16.094 "write_zeroes": true, 
00:12:16.094 "zcopy": true, 00:12:16.094 "get_zone_info": false, 00:12:16.094 "zone_management": false, 00:12:16.094 "zone_append": false, 00:12:16.094 "compare": false, 00:12:16.094 "compare_and_write": false, 00:12:16.094 "abort": true, 00:12:16.094 "seek_hole": false, 00:12:16.094 "seek_data": false, 00:12:16.094 "copy": true, 00:12:16.094 "nvme_iov_md": false 00:12:16.094 }, 00:12:16.094 "memory_domains": [ 00:12:16.094 { 00:12:16.094 "dma_device_id": "system", 00:12:16.094 "dma_device_type": 1 00:12:16.094 }, 00:12:16.094 { 00:12:16.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.094 "dma_device_type": 2 00:12:16.094 } 00:12:16.094 ], 00:12:16.094 "driver_specific": {} 00:12:16.094 } 00:12:16.094 ] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 BaseBdev3 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.094 20:25:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 [ 00:12:16.094 { 00:12:16.094 "name": "BaseBdev3", 00:12:16.094 "aliases": [ 00:12:16.094 "6cec8f59-a89e-4e49-a5ed-56b564c84885" 00:12:16.094 ], 00:12:16.094 "product_name": "Malloc disk", 00:12:16.094 "block_size": 512, 00:12:16.094 "num_blocks": 65536, 00:12:16.094 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:16.094 "assigned_rate_limits": { 00:12:16.094 "rw_ios_per_sec": 0, 00:12:16.094 "rw_mbytes_per_sec": 0, 00:12:16.094 "r_mbytes_per_sec": 0, 00:12:16.094 "w_mbytes_per_sec": 0 00:12:16.094 }, 00:12:16.094 "claimed": false, 00:12:16.094 "zoned": false, 00:12:16.094 "supported_io_types": { 00:12:16.094 "read": true, 00:12:16.094 "write": true, 00:12:16.094 "unmap": true, 00:12:16.094 "flush": true, 00:12:16.094 "reset": true, 00:12:16.094 "nvme_admin": false, 00:12:16.094 "nvme_io": false, 00:12:16.094 "nvme_io_md": false, 00:12:16.094 "write_zeroes": true, 
00:12:16.094 "zcopy": true, 00:12:16.094 "get_zone_info": false, 00:12:16.094 "zone_management": false, 00:12:16.094 "zone_append": false, 00:12:16.094 "compare": false, 00:12:16.094 "compare_and_write": false, 00:12:16.094 "abort": true, 00:12:16.094 "seek_hole": false, 00:12:16.094 "seek_data": false, 00:12:16.094 "copy": true, 00:12:16.094 "nvme_iov_md": false 00:12:16.094 }, 00:12:16.094 "memory_domains": [ 00:12:16.094 { 00:12:16.094 "dma_device_id": "system", 00:12:16.094 "dma_device_type": 1 00:12:16.094 }, 00:12:16.094 { 00:12:16.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.094 "dma_device_type": 2 00:12:16.094 } 00:12:16.094 ], 00:12:16.094 "driver_specific": {} 00:12:16.094 } 00:12:16.094 ] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 [2024-11-26 20:25:09.594780] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.094 [2024-11-26 20:25:09.594884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.094 [2024-11-26 20:25:09.594954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.094 [2024-11-26 20:25:09.597059] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.354 20:25:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.354 "name": "Existed_Raid", 00:12:16.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.354 "strip_size_kb": 64, 00:12:16.354 "state": "configuring", 00:12:16.354 "raid_level": "concat", 00:12:16.354 "superblock": false, 00:12:16.354 "num_base_bdevs": 3, 00:12:16.354 "num_base_bdevs_discovered": 2, 00:12:16.354 "num_base_bdevs_operational": 3, 00:12:16.354 "base_bdevs_list": [ 00:12:16.354 { 00:12:16.354 "name": "BaseBdev1", 00:12:16.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.354 "is_configured": false, 00:12:16.354 "data_offset": 0, 00:12:16.354 "data_size": 0 00:12:16.354 }, 00:12:16.354 { 00:12:16.354 "name": "BaseBdev2", 00:12:16.354 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:16.354 "is_configured": true, 00:12:16.354 "data_offset": 0, 00:12:16.354 "data_size": 65536 00:12:16.354 }, 00:12:16.354 { 00:12:16.354 "name": "BaseBdev3", 00:12:16.354 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:16.354 "is_configured": true, 00:12:16.354 "data_offset": 0, 00:12:16.354 "data_size": 65536 00:12:16.354 } 00:12:16.354 ] 00:12:16.354 }' 00:12:16.354 20:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.354 20:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.614 [2024-11-26 20:25:10.010131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.614 "name": "Existed_Raid", 00:12:16.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.614 "strip_size_kb": 64, 00:12:16.614 "state": "configuring", 00:12:16.614 "raid_level": "concat", 00:12:16.614 "superblock": false, 
00:12:16.614 "num_base_bdevs": 3, 00:12:16.614 "num_base_bdevs_discovered": 1, 00:12:16.614 "num_base_bdevs_operational": 3, 00:12:16.614 "base_bdevs_list": [ 00:12:16.614 { 00:12:16.614 "name": "BaseBdev1", 00:12:16.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.614 "is_configured": false, 00:12:16.614 "data_offset": 0, 00:12:16.614 "data_size": 0 00:12:16.614 }, 00:12:16.614 { 00:12:16.614 "name": null, 00:12:16.614 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:16.614 "is_configured": false, 00:12:16.614 "data_offset": 0, 00:12:16.614 "data_size": 65536 00:12:16.614 }, 00:12:16.614 { 00:12:16.614 "name": "BaseBdev3", 00:12:16.614 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:16.614 "is_configured": true, 00:12:16.614 "data_offset": 0, 00:12:16.614 "data_size": 65536 00:12:16.614 } 00:12:16.614 ] 00:12:16.614 }' 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.614 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.181 
20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.181 [2024-11-26 20:25:10.535120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.181 BaseBdev1 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.181 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.182 [ 00:12:17.182 { 00:12:17.182 "name": "BaseBdev1", 00:12:17.182 "aliases": [ 00:12:17.182 "88af0086-23b3-462f-b1cc-175c7a3bc41d" 00:12:17.182 ], 00:12:17.182 "product_name": 
"Malloc disk", 00:12:17.182 "block_size": 512, 00:12:17.182 "num_blocks": 65536, 00:12:17.182 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:17.182 "assigned_rate_limits": { 00:12:17.182 "rw_ios_per_sec": 0, 00:12:17.182 "rw_mbytes_per_sec": 0, 00:12:17.182 "r_mbytes_per_sec": 0, 00:12:17.182 "w_mbytes_per_sec": 0 00:12:17.182 }, 00:12:17.182 "claimed": true, 00:12:17.182 "claim_type": "exclusive_write", 00:12:17.182 "zoned": false, 00:12:17.182 "supported_io_types": { 00:12:17.182 "read": true, 00:12:17.182 "write": true, 00:12:17.182 "unmap": true, 00:12:17.182 "flush": true, 00:12:17.182 "reset": true, 00:12:17.182 "nvme_admin": false, 00:12:17.182 "nvme_io": false, 00:12:17.182 "nvme_io_md": false, 00:12:17.182 "write_zeroes": true, 00:12:17.182 "zcopy": true, 00:12:17.182 "get_zone_info": false, 00:12:17.182 "zone_management": false, 00:12:17.182 "zone_append": false, 00:12:17.182 "compare": false, 00:12:17.182 "compare_and_write": false, 00:12:17.182 "abort": true, 00:12:17.182 "seek_hole": false, 00:12:17.182 "seek_data": false, 00:12:17.182 "copy": true, 00:12:17.182 "nvme_iov_md": false 00:12:17.182 }, 00:12:17.182 "memory_domains": [ 00:12:17.182 { 00:12:17.182 "dma_device_id": "system", 00:12:17.182 "dma_device_type": 1 00:12:17.182 }, 00:12:17.182 { 00:12:17.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.182 "dma_device_type": 2 00:12:17.182 } 00:12:17.182 ], 00:12:17.182 "driver_specific": {} 00:12:17.182 } 00:12:17.182 ] 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.182 20:25:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.182 "name": "Existed_Raid", 00:12:17.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.182 "strip_size_kb": 64, 00:12:17.182 "state": "configuring", 00:12:17.182 "raid_level": "concat", 00:12:17.182 "superblock": false, 00:12:17.182 "num_base_bdevs": 3, 00:12:17.182 "num_base_bdevs_discovered": 2, 00:12:17.182 "num_base_bdevs_operational": 3, 00:12:17.182 "base_bdevs_list": [ 00:12:17.182 { 00:12:17.182 "name": "BaseBdev1", 
00:12:17.182 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:17.182 "is_configured": true, 00:12:17.182 "data_offset": 0, 00:12:17.182 "data_size": 65536 00:12:17.182 }, 00:12:17.182 { 00:12:17.182 "name": null, 00:12:17.182 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:17.182 "is_configured": false, 00:12:17.182 "data_offset": 0, 00:12:17.182 "data_size": 65536 00:12:17.182 }, 00:12:17.182 { 00:12:17.182 "name": "BaseBdev3", 00:12:17.182 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:17.182 "is_configured": true, 00:12:17.182 "data_offset": 0, 00:12:17.182 "data_size": 65536 00:12:17.182 } 00:12:17.182 ] 00:12:17.182 }' 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.182 20:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.752 [2024-11-26 20:25:11.054284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.752 
20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.752 "name": "Existed_Raid", 00:12:17.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:17.752 "strip_size_kb": 64, 00:12:17.752 "state": "configuring", 00:12:17.752 "raid_level": "concat", 00:12:17.752 "superblock": false, 00:12:17.752 "num_base_bdevs": 3, 00:12:17.752 "num_base_bdevs_discovered": 1, 00:12:17.752 "num_base_bdevs_operational": 3, 00:12:17.752 "base_bdevs_list": [ 00:12:17.752 { 00:12:17.752 "name": "BaseBdev1", 00:12:17.752 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:17.752 "is_configured": true, 00:12:17.752 "data_offset": 0, 00:12:17.752 "data_size": 65536 00:12:17.752 }, 00:12:17.752 { 00:12:17.752 "name": null, 00:12:17.752 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:17.752 "is_configured": false, 00:12:17.752 "data_offset": 0, 00:12:17.752 "data_size": 65536 00:12:17.752 }, 00:12:17.752 { 00:12:17.752 "name": null, 00:12:17.752 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:17.752 "is_configured": false, 00:12:17.752 "data_offset": 0, 00:12:17.752 "data_size": 65536 00:12:17.752 } 00:12:17.752 ] 00:12:17.752 }' 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.752 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.011 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.011 [2024-11-26 20:25:11.561478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.275 "name": "Existed_Raid", 00:12:18.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.275 "strip_size_kb": 64, 00:12:18.275 "state": "configuring", 00:12:18.275 "raid_level": "concat", 00:12:18.275 "superblock": false, 00:12:18.275 "num_base_bdevs": 3, 00:12:18.275 "num_base_bdevs_discovered": 2, 00:12:18.275 "num_base_bdevs_operational": 3, 00:12:18.275 "base_bdevs_list": [ 00:12:18.275 { 00:12:18.275 "name": "BaseBdev1", 00:12:18.275 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:18.275 "is_configured": true, 00:12:18.275 "data_offset": 0, 00:12:18.275 "data_size": 65536 00:12:18.275 }, 00:12:18.275 { 00:12:18.275 "name": null, 00:12:18.275 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:18.275 "is_configured": false, 00:12:18.275 "data_offset": 0, 00:12:18.275 "data_size": 65536 00:12:18.275 }, 00:12:18.275 { 00:12:18.275 "name": "BaseBdev3", 00:12:18.275 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:18.275 "is_configured": true, 00:12:18.275 "data_offset": 0, 00:12:18.275 "data_size": 65536 00:12:18.275 } 00:12:18.275 ] 00:12:18.275 }' 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.275 20:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.533 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.533 [2024-11-26 20:25:12.076710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.791 20:25:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.791 "name": "Existed_Raid", 00:12:18.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.791 "strip_size_kb": 64, 00:12:18.791 "state": "configuring", 00:12:18.791 "raid_level": "concat", 00:12:18.791 "superblock": false, 00:12:18.791 "num_base_bdevs": 3, 00:12:18.791 "num_base_bdevs_discovered": 1, 00:12:18.791 "num_base_bdevs_operational": 3, 00:12:18.791 "base_bdevs_list": [ 00:12:18.791 { 00:12:18.791 "name": null, 00:12:18.791 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:18.791 "is_configured": false, 00:12:18.791 "data_offset": 0, 00:12:18.791 "data_size": 65536 00:12:18.791 }, 00:12:18.791 { 00:12:18.791 "name": null, 00:12:18.791 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:18.791 "is_configured": false, 00:12:18.791 "data_offset": 0, 00:12:18.791 "data_size": 65536 00:12:18.791 }, 00:12:18.791 { 00:12:18.791 "name": "BaseBdev3", 00:12:18.791 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:18.791 "is_configured": true, 00:12:18.791 "data_offset": 0, 00:12:18.791 "data_size": 65536 00:12:18.791 } 00:12:18.791 ] 00:12:18.791 }' 00:12:18.791 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.791 20:25:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.359 [2024-11-26 20:25:12.685768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.359 20:25:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.359 "name": "Existed_Raid", 00:12:19.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.359 "strip_size_kb": 64, 00:12:19.359 "state": "configuring", 00:12:19.359 "raid_level": "concat", 00:12:19.359 "superblock": false, 00:12:19.359 "num_base_bdevs": 3, 00:12:19.359 "num_base_bdevs_discovered": 2, 00:12:19.359 "num_base_bdevs_operational": 3, 00:12:19.359 "base_bdevs_list": [ 00:12:19.359 { 00:12:19.359 "name": null, 00:12:19.359 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:19.359 "is_configured": false, 00:12:19.359 "data_offset": 0, 00:12:19.359 "data_size": 65536 00:12:19.359 }, 00:12:19.359 { 00:12:19.359 "name": "BaseBdev2", 00:12:19.359 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:19.359 "is_configured": true, 00:12:19.359 "data_offset": 
0, 00:12:19.359 "data_size": 65536 00:12:19.359 }, 00:12:19.359 { 00:12:19.359 "name": "BaseBdev3", 00:12:19.359 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:19.359 "is_configured": true, 00:12:19.359 "data_offset": 0, 00:12:19.359 "data_size": 65536 00:12:19.359 } 00:12:19.359 ] 00:12:19.359 }' 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.359 20:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.618 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.618 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.618 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.618 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88af0086-23b3-462f-b1cc-175c7a3bc41d 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.877 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.877 [2024-11-26 20:25:13.308229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:19.877 [2024-11-26 20:25:13.308384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:19.877 [2024-11-26 20:25:13.308412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:19.877 [2024-11-26 20:25:13.308713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:19.877 [2024-11-26 20:25:13.308919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:19.877 [2024-11-26 20:25:13.308960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:19.877 [2024-11-26 20:25:13.309284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.877 NewBaseBdev 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.878 
20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.878 [ 00:12:19.878 { 00:12:19.878 "name": "NewBaseBdev", 00:12:19.878 "aliases": [ 00:12:19.878 "88af0086-23b3-462f-b1cc-175c7a3bc41d" 00:12:19.878 ], 00:12:19.878 "product_name": "Malloc disk", 00:12:19.878 "block_size": 512, 00:12:19.878 "num_blocks": 65536, 00:12:19.878 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:19.878 "assigned_rate_limits": { 00:12:19.878 "rw_ios_per_sec": 0, 00:12:19.878 "rw_mbytes_per_sec": 0, 00:12:19.878 "r_mbytes_per_sec": 0, 00:12:19.878 "w_mbytes_per_sec": 0 00:12:19.878 }, 00:12:19.878 "claimed": true, 00:12:19.878 "claim_type": "exclusive_write", 00:12:19.878 "zoned": false, 00:12:19.878 "supported_io_types": { 00:12:19.878 "read": true, 00:12:19.878 "write": true, 00:12:19.878 "unmap": true, 00:12:19.878 "flush": true, 00:12:19.878 "reset": true, 00:12:19.878 "nvme_admin": false, 00:12:19.878 "nvme_io": false, 00:12:19.878 "nvme_io_md": false, 00:12:19.878 "write_zeroes": true, 00:12:19.878 "zcopy": true, 00:12:19.878 "get_zone_info": false, 00:12:19.878 "zone_management": false, 00:12:19.878 "zone_append": false, 00:12:19.878 "compare": false, 00:12:19.878 "compare_and_write": false, 00:12:19.878 "abort": true, 00:12:19.878 "seek_hole": false, 00:12:19.878 "seek_data": false, 00:12:19.878 "copy": true, 00:12:19.878 "nvme_iov_md": false 00:12:19.878 }, 00:12:19.878 
"memory_domains": [ 00:12:19.878 { 00:12:19.878 "dma_device_id": "system", 00:12:19.878 "dma_device_type": 1 00:12:19.878 }, 00:12:19.878 { 00:12:19.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.878 "dma_device_type": 2 00:12:19.878 } 00:12:19.878 ], 00:12:19.878 "driver_specific": {} 00:12:19.878 } 00:12:19.878 ] 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.878 "name": "Existed_Raid", 00:12:19.878 "uuid": "47e697c5-0049-4e66-a981-fff2454d93c2", 00:12:19.878 "strip_size_kb": 64, 00:12:19.878 "state": "online", 00:12:19.878 "raid_level": "concat", 00:12:19.878 "superblock": false, 00:12:19.878 "num_base_bdevs": 3, 00:12:19.878 "num_base_bdevs_discovered": 3, 00:12:19.878 "num_base_bdevs_operational": 3, 00:12:19.878 "base_bdevs_list": [ 00:12:19.878 { 00:12:19.878 "name": "NewBaseBdev", 00:12:19.878 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:19.878 "is_configured": true, 00:12:19.878 "data_offset": 0, 00:12:19.878 "data_size": 65536 00:12:19.878 }, 00:12:19.878 { 00:12:19.878 "name": "BaseBdev2", 00:12:19.878 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:19.878 "is_configured": true, 00:12:19.878 "data_offset": 0, 00:12:19.878 "data_size": 65536 00:12:19.878 }, 00:12:19.878 { 00:12:19.878 "name": "BaseBdev3", 00:12:19.878 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:19.878 "is_configured": true, 00:12:19.878 "data_offset": 0, 00:12:19.878 "data_size": 65536 00:12:19.878 } 00:12:19.878 ] 00:12:19.878 }' 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.878 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.447 [2024-11-26 20:25:13.803791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:20.447 "name": "Existed_Raid", 00:12:20.447 "aliases": [ 00:12:20.447 "47e697c5-0049-4e66-a981-fff2454d93c2" 00:12:20.447 ], 00:12:20.447 "product_name": "Raid Volume", 00:12:20.447 "block_size": 512, 00:12:20.447 "num_blocks": 196608, 00:12:20.447 "uuid": "47e697c5-0049-4e66-a981-fff2454d93c2", 00:12:20.447 "assigned_rate_limits": { 00:12:20.447 "rw_ios_per_sec": 0, 00:12:20.447 "rw_mbytes_per_sec": 0, 00:12:20.447 "r_mbytes_per_sec": 0, 00:12:20.447 "w_mbytes_per_sec": 0 00:12:20.447 }, 00:12:20.447 "claimed": false, 00:12:20.447 "zoned": false, 00:12:20.447 "supported_io_types": { 00:12:20.447 "read": true, 00:12:20.447 "write": true, 00:12:20.447 "unmap": true, 00:12:20.447 "flush": true, 00:12:20.447 "reset": true, 00:12:20.447 "nvme_admin": false, 00:12:20.447 "nvme_io": false, 00:12:20.447 "nvme_io_md": false, 00:12:20.447 "write_zeroes": true, 
00:12:20.447 "zcopy": false, 00:12:20.447 "get_zone_info": false, 00:12:20.447 "zone_management": false, 00:12:20.447 "zone_append": false, 00:12:20.447 "compare": false, 00:12:20.447 "compare_and_write": false, 00:12:20.447 "abort": false, 00:12:20.447 "seek_hole": false, 00:12:20.447 "seek_data": false, 00:12:20.447 "copy": false, 00:12:20.447 "nvme_iov_md": false 00:12:20.447 }, 00:12:20.447 "memory_domains": [ 00:12:20.447 { 00:12:20.447 "dma_device_id": "system", 00:12:20.447 "dma_device_type": 1 00:12:20.447 }, 00:12:20.447 { 00:12:20.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.447 "dma_device_type": 2 00:12:20.447 }, 00:12:20.447 { 00:12:20.447 "dma_device_id": "system", 00:12:20.447 "dma_device_type": 1 00:12:20.447 }, 00:12:20.447 { 00:12:20.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.447 "dma_device_type": 2 00:12:20.447 }, 00:12:20.447 { 00:12:20.447 "dma_device_id": "system", 00:12:20.447 "dma_device_type": 1 00:12:20.447 }, 00:12:20.447 { 00:12:20.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.447 "dma_device_type": 2 00:12:20.447 } 00:12:20.447 ], 00:12:20.447 "driver_specific": { 00:12:20.447 "raid": { 00:12:20.447 "uuid": "47e697c5-0049-4e66-a981-fff2454d93c2", 00:12:20.447 "strip_size_kb": 64, 00:12:20.447 "state": "online", 00:12:20.447 "raid_level": "concat", 00:12:20.447 "superblock": false, 00:12:20.447 "num_base_bdevs": 3, 00:12:20.447 "num_base_bdevs_discovered": 3, 00:12:20.447 "num_base_bdevs_operational": 3, 00:12:20.447 "base_bdevs_list": [ 00:12:20.447 { 00:12:20.447 "name": "NewBaseBdev", 00:12:20.447 "uuid": "88af0086-23b3-462f-b1cc-175c7a3bc41d", 00:12:20.447 "is_configured": true, 00:12:20.447 "data_offset": 0, 00:12:20.447 "data_size": 65536 00:12:20.447 }, 00:12:20.447 { 00:12:20.447 "name": "BaseBdev2", 00:12:20.447 "uuid": "d001ea37-3fa7-4d80-8bfa-442f7458e8df", 00:12:20.447 "is_configured": true, 00:12:20.447 "data_offset": 0, 00:12:20.447 "data_size": 65536 00:12:20.447 }, 00:12:20.447 { 
00:12:20.447 "name": "BaseBdev3", 00:12:20.447 "uuid": "6cec8f59-a89e-4e49-a5ed-56b564c84885", 00:12:20.447 "is_configured": true, 00:12:20.447 "data_offset": 0, 00:12:20.447 "data_size": 65536 00:12:20.447 } 00:12:20.447 ] 00:12:20.447 } 00:12:20.447 } 00:12:20.447 }' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:20.447 BaseBdev2 00:12:20.447 BaseBdev3' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:20.447 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.448 20:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:20.707 [2024-11-26 20:25:14.023066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.707 [2024-11-26 20:25:14.023094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.707 [2024-11-26 20:25:14.023172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.707 [2024-11-26 20:25:14.023229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.707 [2024-11-26 20:25:14.023253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65909 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65909 ']' 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65909 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65909 00:12:20.707 killing process with pid 65909 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65909' 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65909 00:12:20.707 [2024-11-26 20:25:14.069643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.707 20:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65909 00:12:20.965 [2024-11-26 20:25:14.382032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:22.341 00:12:22.341 real 0m10.775s 00:12:22.341 user 0m17.099s 00:12:22.341 sys 0m1.791s 00:12:22.341 ************************************ 00:12:22.341 END TEST raid_state_function_test 00:12:22.341 ************************************ 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.341 20:25:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:22.341 20:25:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:22.341 20:25:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.341 20:25:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.341 ************************************ 00:12:22.341 START TEST raid_state_function_test_sb 00:12:22.341 ************************************ 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.341 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:22.342 Process raid pid: 66530 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66530 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66530' 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66530 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66530 ']' 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.342 20:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.342 [2024-11-26 20:25:15.762791] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:12:22.342 [2024-11-26 20:25:15.762996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.601 [2024-11-26 20:25:15.938745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.601 [2024-11-26 20:25:16.066966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.859 [2024-11-26 20:25:16.276065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.859 [2024-11-26 20:25:16.276116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 [2024-11-26 20:25:16.616014] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:23.118 [2024-11-26 20:25:16.616073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:23.118 [2024-11-26 
20:25:16.616084] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.118 [2024-11-26 20:25:16.616112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.118 [2024-11-26 20:25:16.616119] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:23.118 [2024-11-26 20:25:16.616129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.377 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.377 "name": "Existed_Raid", 00:12:23.377 "uuid": "cf03a75e-4f3d-43fd-b70b-1b3584a8aa05", 00:12:23.377 "strip_size_kb": 64, 00:12:23.377 "state": "configuring", 00:12:23.377 "raid_level": "concat", 00:12:23.377 "superblock": true, 00:12:23.377 "num_base_bdevs": 3, 00:12:23.377 "num_base_bdevs_discovered": 0, 00:12:23.377 "num_base_bdevs_operational": 3, 00:12:23.377 "base_bdevs_list": [ 00:12:23.377 { 00:12:23.377 "name": "BaseBdev1", 00:12:23.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.377 "is_configured": false, 00:12:23.377 "data_offset": 0, 00:12:23.377 "data_size": 0 00:12:23.377 }, 00:12:23.377 { 00:12:23.377 "name": "BaseBdev2", 00:12:23.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.377 "is_configured": false, 00:12:23.377 "data_offset": 0, 00:12:23.377 "data_size": 0 00:12:23.377 }, 00:12:23.377 { 00:12:23.377 "name": "BaseBdev3", 00:12:23.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.377 "is_configured": false, 00:12:23.377 "data_offset": 0, 00:12:23.377 "data_size": 0 00:12:23.377 } 00:12:23.377 ] 00:12:23.377 }' 00:12:23.377 20:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.377 20:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.636 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.636 20:25:17 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.636 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.636 [2024-11-26 20:25:17.091224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.636 [2024-11-26 20:25:17.091345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:23.636 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.636 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:23.636 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.637 [2024-11-26 20:25:17.103192] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:23.637 [2024-11-26 20:25:17.103299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:23.637 [2024-11-26 20:25:17.103357] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.637 [2024-11-26 20:25:17.103386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.637 [2024-11-26 20:25:17.103444] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:23.637 [2024-11-26 20:25:17.103478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.637 
20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.637 [2024-11-26 20:25:17.154026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.637 BaseBdev1 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.637 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.637 [ 00:12:23.637 { 
00:12:23.637 "name": "BaseBdev1", 00:12:23.637 "aliases": [ 00:12:23.637 "56f42cd6-ef3f-4859-8929-76e8c83ebf69" 00:12:23.637 ], 00:12:23.637 "product_name": "Malloc disk", 00:12:23.637 "block_size": 512, 00:12:23.637 "num_blocks": 65536, 00:12:23.637 "uuid": "56f42cd6-ef3f-4859-8929-76e8c83ebf69", 00:12:23.637 "assigned_rate_limits": { 00:12:23.637 "rw_ios_per_sec": 0, 00:12:23.637 "rw_mbytes_per_sec": 0, 00:12:23.637 "r_mbytes_per_sec": 0, 00:12:23.637 "w_mbytes_per_sec": 0 00:12:23.637 }, 00:12:23.637 "claimed": true, 00:12:23.637 "claim_type": "exclusive_write", 00:12:23.637 "zoned": false, 00:12:23.637 "supported_io_types": { 00:12:23.637 "read": true, 00:12:23.637 "write": true, 00:12:23.637 "unmap": true, 00:12:23.637 "flush": true, 00:12:23.637 "reset": true, 00:12:23.637 "nvme_admin": false, 00:12:23.637 "nvme_io": false, 00:12:23.637 "nvme_io_md": false, 00:12:23.637 "write_zeroes": true, 00:12:23.637 "zcopy": true, 00:12:23.637 "get_zone_info": false, 00:12:23.637 "zone_management": false, 00:12:23.637 "zone_append": false, 00:12:23.637 "compare": false, 00:12:23.637 "compare_and_write": false, 00:12:23.637 "abort": true, 00:12:23.896 "seek_hole": false, 00:12:23.896 "seek_data": false, 00:12:23.896 "copy": true, 00:12:23.896 "nvme_iov_md": false 00:12:23.896 }, 00:12:23.896 "memory_domains": [ 00:12:23.896 { 00:12:23.896 "dma_device_id": "system", 00:12:23.896 "dma_device_type": 1 00:12:23.896 }, 00:12:23.896 { 00:12:23.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.896 "dma_device_type": 2 00:12:23.896 } 00:12:23.896 ], 00:12:23.896 "driver_specific": {} 00:12:23.896 } 00:12:23.896 ] 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.896 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.897 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.897 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.897 "name": "Existed_Raid", 00:12:23.897 "uuid": "335d7bca-864e-461d-af91-6ae3ffe3afd6", 00:12:23.897 "strip_size_kb": 64, 00:12:23.897 "state": "configuring", 00:12:23.897 "raid_level": "concat", 00:12:23.897 "superblock": true, 00:12:23.897 
"num_base_bdevs": 3, 00:12:23.897 "num_base_bdevs_discovered": 1, 00:12:23.897 "num_base_bdevs_operational": 3, 00:12:23.897 "base_bdevs_list": [ 00:12:23.897 { 00:12:23.897 "name": "BaseBdev1", 00:12:23.897 "uuid": "56f42cd6-ef3f-4859-8929-76e8c83ebf69", 00:12:23.897 "is_configured": true, 00:12:23.897 "data_offset": 2048, 00:12:23.897 "data_size": 63488 00:12:23.897 }, 00:12:23.897 { 00:12:23.897 "name": "BaseBdev2", 00:12:23.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.897 "is_configured": false, 00:12:23.897 "data_offset": 0, 00:12:23.897 "data_size": 0 00:12:23.897 }, 00:12:23.897 { 00:12:23.897 "name": "BaseBdev3", 00:12:23.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.897 "is_configured": false, 00:12:23.897 "data_offset": 0, 00:12:23.897 "data_size": 0 00:12:23.897 } 00:12:23.897 ] 00:12:23.897 }' 00:12:23.897 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.897 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.157 [2024-11-26 20:25:17.625364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:24.157 [2024-11-26 20:25:17.625422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:24.157 
20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.157 [2024-11-26 20:25:17.633389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.157 [2024-11-26 20:25:17.635131] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:24.157 [2024-11-26 20:25:17.635171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:24.157 [2024-11-26 20:25:17.635180] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:24.157 [2024-11-26 20:25:17.635188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.157 "name": "Existed_Raid", 00:12:24.157 "uuid": "b85f5a93-5979-43cc-a03f-2c5e2e675799", 00:12:24.157 "strip_size_kb": 64, 00:12:24.157 "state": "configuring", 00:12:24.157 "raid_level": "concat", 00:12:24.157 "superblock": true, 00:12:24.157 "num_base_bdevs": 3, 00:12:24.157 "num_base_bdevs_discovered": 1, 00:12:24.157 "num_base_bdevs_operational": 3, 00:12:24.157 "base_bdevs_list": [ 00:12:24.157 { 00:12:24.157 "name": "BaseBdev1", 00:12:24.157 "uuid": "56f42cd6-ef3f-4859-8929-76e8c83ebf69", 00:12:24.157 "is_configured": true, 00:12:24.157 "data_offset": 2048, 00:12:24.157 "data_size": 63488 00:12:24.157 }, 00:12:24.157 { 00:12:24.157 "name": "BaseBdev2", 00:12:24.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.157 "is_configured": false, 00:12:24.157 "data_offset": 0, 00:12:24.157 "data_size": 0 00:12:24.157 }, 00:12:24.157 { 00:12:24.157 "name": "BaseBdev3", 00:12:24.157 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:24.157 "is_configured": false, 00:12:24.157 "data_offset": 0, 00:12:24.157 "data_size": 0 00:12:24.157 } 00:12:24.157 ] 00:12:24.157 }' 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.157 20:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 [2024-11-26 20:25:18.106600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.726 BaseBdev2 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 [ 00:12:24.726 { 00:12:24.726 "name": "BaseBdev2", 00:12:24.726 "aliases": [ 00:12:24.726 "2b2b8c39-6e00-484a-bfc6-f8ab0bfdcc8d" 00:12:24.726 ], 00:12:24.726 "product_name": "Malloc disk", 00:12:24.726 "block_size": 512, 00:12:24.726 "num_blocks": 65536, 00:12:24.726 "uuid": "2b2b8c39-6e00-484a-bfc6-f8ab0bfdcc8d", 00:12:24.726 "assigned_rate_limits": { 00:12:24.726 "rw_ios_per_sec": 0, 00:12:24.726 "rw_mbytes_per_sec": 0, 00:12:24.726 "r_mbytes_per_sec": 0, 00:12:24.726 "w_mbytes_per_sec": 0 00:12:24.726 }, 00:12:24.726 "claimed": true, 00:12:24.726 "claim_type": "exclusive_write", 00:12:24.726 "zoned": false, 00:12:24.726 "supported_io_types": { 00:12:24.726 "read": true, 00:12:24.726 "write": true, 00:12:24.726 "unmap": true, 00:12:24.726 "flush": true, 00:12:24.726 "reset": true, 00:12:24.726 "nvme_admin": false, 00:12:24.726 "nvme_io": false, 00:12:24.726 "nvme_io_md": false, 00:12:24.726 "write_zeroes": true, 00:12:24.726 "zcopy": true, 00:12:24.726 "get_zone_info": false, 00:12:24.726 "zone_management": false, 00:12:24.726 "zone_append": false, 00:12:24.726 "compare": false, 00:12:24.726 "compare_and_write": false, 00:12:24.726 "abort": true, 00:12:24.726 "seek_hole": false, 00:12:24.726 "seek_data": false, 00:12:24.726 "copy": true, 00:12:24.726 "nvme_iov_md": false 00:12:24.726 }, 00:12:24.726 "memory_domains": [ 00:12:24.726 { 00:12:24.726 "dma_device_id": "system", 00:12:24.726 "dma_device_type": 1 00:12:24.726 }, 00:12:24.726 { 00:12:24.726 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.726 "dma_device_type": 2 00:12:24.726 } 00:12:24.726 ], 00:12:24.726 "driver_specific": {} 00:12:24.726 } 00:12:24.726 ] 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.726 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.727 "name": "Existed_Raid", 00:12:24.727 "uuid": "b85f5a93-5979-43cc-a03f-2c5e2e675799", 00:12:24.727 "strip_size_kb": 64, 00:12:24.727 "state": "configuring", 00:12:24.727 "raid_level": "concat", 00:12:24.727 "superblock": true, 00:12:24.727 "num_base_bdevs": 3, 00:12:24.727 "num_base_bdevs_discovered": 2, 00:12:24.727 "num_base_bdevs_operational": 3, 00:12:24.727 "base_bdevs_list": [ 00:12:24.727 { 00:12:24.727 "name": "BaseBdev1", 00:12:24.727 "uuid": "56f42cd6-ef3f-4859-8929-76e8c83ebf69", 00:12:24.727 "is_configured": true, 00:12:24.727 "data_offset": 2048, 00:12:24.727 "data_size": 63488 00:12:24.727 }, 00:12:24.727 { 00:12:24.727 "name": "BaseBdev2", 00:12:24.727 "uuid": "2b2b8c39-6e00-484a-bfc6-f8ab0bfdcc8d", 00:12:24.727 "is_configured": true, 00:12:24.727 "data_offset": 2048, 00:12:24.727 "data_size": 63488 00:12:24.727 }, 00:12:24.727 { 00:12:24.727 "name": "BaseBdev3", 00:12:24.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.727 "is_configured": false, 00:12:24.727 "data_offset": 0, 00:12:24.727 "data_size": 0 00:12:24.727 } 00:12:24.727 ] 00:12:24.727 }' 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.727 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.986 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.986 20:25:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.986 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 [2024-11-26 20:25:18.585669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.245 [2024-11-26 20:25:18.586081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:25.245 [2024-11-26 20:25:18.586153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:25.245 [2024-11-26 20:25:18.586458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:25.245 [2024-11-26 20:25:18.586676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:25.245 [2024-11-26 20:25:18.586722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:25.245 BaseBdev3 00:12:25.245 [2024-11-26 20:25:18.586934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.245 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 [ 00:12:25.245 { 00:12:25.245 "name": "BaseBdev3", 00:12:25.245 "aliases": [ 00:12:25.245 "ff6c3314-687a-4ae2-a68f-3bf5d17b5733" 00:12:25.245 ], 00:12:25.245 "product_name": "Malloc disk", 00:12:25.245 "block_size": 512, 00:12:25.245 "num_blocks": 65536, 00:12:25.245 "uuid": "ff6c3314-687a-4ae2-a68f-3bf5d17b5733", 00:12:25.245 "assigned_rate_limits": { 00:12:25.245 "rw_ios_per_sec": 0, 00:12:25.245 "rw_mbytes_per_sec": 0, 00:12:25.245 "r_mbytes_per_sec": 0, 00:12:25.245 "w_mbytes_per_sec": 0 00:12:25.245 }, 00:12:25.245 "claimed": true, 00:12:25.245 "claim_type": "exclusive_write", 00:12:25.245 "zoned": false, 00:12:25.245 "supported_io_types": { 00:12:25.245 "read": true, 00:12:25.245 "write": true, 00:12:25.245 "unmap": true, 00:12:25.245 "flush": true, 00:12:25.245 "reset": true, 00:12:25.245 "nvme_admin": false, 00:12:25.245 "nvme_io": false, 00:12:25.245 "nvme_io_md": false, 00:12:25.245 "write_zeroes": true, 00:12:25.245 "zcopy": true, 00:12:25.245 "get_zone_info": false, 00:12:25.245 "zone_management": false, 00:12:25.245 "zone_append": false, 00:12:25.245 "compare": false, 00:12:25.245 "compare_and_write": false, 00:12:25.245 "abort": true, 00:12:25.245 "seek_hole": false, 00:12:25.245 "seek_data": false, 
00:12:25.245 "copy": true, 00:12:25.245 "nvme_iov_md": false 00:12:25.245 }, 00:12:25.245 "memory_domains": [ 00:12:25.245 { 00:12:25.245 "dma_device_id": "system", 00:12:25.245 "dma_device_type": 1 00:12:25.245 }, 00:12:25.245 { 00:12:25.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.245 "dma_device_type": 2 00:12:25.245 } 00:12:25.245 ], 00:12:25.246 "driver_specific": {} 00:12:25.246 } 00:12:25.246 ] 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.246 "name": "Existed_Raid", 00:12:25.246 "uuid": "b85f5a93-5979-43cc-a03f-2c5e2e675799", 00:12:25.246 "strip_size_kb": 64, 00:12:25.246 "state": "online", 00:12:25.246 "raid_level": "concat", 00:12:25.246 "superblock": true, 00:12:25.246 "num_base_bdevs": 3, 00:12:25.246 "num_base_bdevs_discovered": 3, 00:12:25.246 "num_base_bdevs_operational": 3, 00:12:25.246 "base_bdevs_list": [ 00:12:25.246 { 00:12:25.246 "name": "BaseBdev1", 00:12:25.246 "uuid": "56f42cd6-ef3f-4859-8929-76e8c83ebf69", 00:12:25.246 "is_configured": true, 00:12:25.246 "data_offset": 2048, 00:12:25.246 "data_size": 63488 00:12:25.246 }, 00:12:25.246 { 00:12:25.246 "name": "BaseBdev2", 00:12:25.246 "uuid": "2b2b8c39-6e00-484a-bfc6-f8ab0bfdcc8d", 00:12:25.246 "is_configured": true, 00:12:25.246 "data_offset": 2048, 00:12:25.246 "data_size": 63488 00:12:25.246 }, 00:12:25.246 { 00:12:25.246 "name": "BaseBdev3", 00:12:25.246 "uuid": "ff6c3314-687a-4ae2-a68f-3bf5d17b5733", 00:12:25.246 "is_configured": true, 00:12:25.246 "data_offset": 2048, 00:12:25.246 "data_size": 63488 00:12:25.246 } 00:12:25.246 ] 00:12:25.246 }' 00:12:25.246 20:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.246 20:25:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.811 [2024-11-26 20:25:19.097230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.811 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.811 "name": "Existed_Raid", 00:12:25.811 "aliases": [ 00:12:25.811 "b85f5a93-5979-43cc-a03f-2c5e2e675799" 00:12:25.811 ], 00:12:25.811 "product_name": "Raid Volume", 00:12:25.811 "block_size": 512, 00:12:25.811 "num_blocks": 190464, 00:12:25.811 "uuid": "b85f5a93-5979-43cc-a03f-2c5e2e675799", 00:12:25.811 "assigned_rate_limits": { 00:12:25.811 "rw_ios_per_sec": 0, 00:12:25.811 "rw_mbytes_per_sec": 0, 00:12:25.811 
"r_mbytes_per_sec": 0, 00:12:25.811 "w_mbytes_per_sec": 0 00:12:25.811 }, 00:12:25.811 "claimed": false, 00:12:25.811 "zoned": false, 00:12:25.811 "supported_io_types": { 00:12:25.811 "read": true, 00:12:25.811 "write": true, 00:12:25.811 "unmap": true, 00:12:25.811 "flush": true, 00:12:25.811 "reset": true, 00:12:25.811 "nvme_admin": false, 00:12:25.811 "nvme_io": false, 00:12:25.811 "nvme_io_md": false, 00:12:25.811 "write_zeroes": true, 00:12:25.811 "zcopy": false, 00:12:25.811 "get_zone_info": false, 00:12:25.811 "zone_management": false, 00:12:25.811 "zone_append": false, 00:12:25.811 "compare": false, 00:12:25.811 "compare_and_write": false, 00:12:25.811 "abort": false, 00:12:25.811 "seek_hole": false, 00:12:25.811 "seek_data": false, 00:12:25.811 "copy": false, 00:12:25.811 "nvme_iov_md": false 00:12:25.811 }, 00:12:25.811 "memory_domains": [ 00:12:25.811 { 00:12:25.811 "dma_device_id": "system", 00:12:25.811 "dma_device_type": 1 00:12:25.811 }, 00:12:25.811 { 00:12:25.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.811 "dma_device_type": 2 00:12:25.811 }, 00:12:25.811 { 00:12:25.811 "dma_device_id": "system", 00:12:25.811 "dma_device_type": 1 00:12:25.811 }, 00:12:25.811 { 00:12:25.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.811 "dma_device_type": 2 00:12:25.811 }, 00:12:25.811 { 00:12:25.811 "dma_device_id": "system", 00:12:25.811 "dma_device_type": 1 00:12:25.811 }, 00:12:25.812 { 00:12:25.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.812 "dma_device_type": 2 00:12:25.812 } 00:12:25.812 ], 00:12:25.812 "driver_specific": { 00:12:25.812 "raid": { 00:12:25.812 "uuid": "b85f5a93-5979-43cc-a03f-2c5e2e675799", 00:12:25.812 "strip_size_kb": 64, 00:12:25.812 "state": "online", 00:12:25.812 "raid_level": "concat", 00:12:25.812 "superblock": true, 00:12:25.812 "num_base_bdevs": 3, 00:12:25.812 "num_base_bdevs_discovered": 3, 00:12:25.812 "num_base_bdevs_operational": 3, 00:12:25.812 "base_bdevs_list": [ 00:12:25.812 { 00:12:25.812 
"name": "BaseBdev1", 00:12:25.812 "uuid": "56f42cd6-ef3f-4859-8929-76e8c83ebf69", 00:12:25.812 "is_configured": true, 00:12:25.812 "data_offset": 2048, 00:12:25.812 "data_size": 63488 00:12:25.812 }, 00:12:25.812 { 00:12:25.812 "name": "BaseBdev2", 00:12:25.812 "uuid": "2b2b8c39-6e00-484a-bfc6-f8ab0bfdcc8d", 00:12:25.812 "is_configured": true, 00:12:25.812 "data_offset": 2048, 00:12:25.812 "data_size": 63488 00:12:25.812 }, 00:12:25.812 { 00:12:25.812 "name": "BaseBdev3", 00:12:25.812 "uuid": "ff6c3314-687a-4ae2-a68f-3bf5d17b5733", 00:12:25.812 "is_configured": true, 00:12:25.812 "data_offset": 2048, 00:12:25.812 "data_size": 63488 00:12:25.812 } 00:12:25.812 ] 00:12:25.812 } 00:12:25.812 } 00:12:25.812 }' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:25.812 BaseBdev2 00:12:25.812 BaseBdev3' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.812 20:25:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.812 20:25:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.092 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.092 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.092 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.092 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.092 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.092 [2024-11-26 20:25:19.388477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.092 [2024-11-26 20:25:19.388556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.092 [2024-11-26 20:25:19.388677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.092 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.092 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.093 "name": "Existed_Raid", 00:12:26.093 "uuid": "b85f5a93-5979-43cc-a03f-2c5e2e675799", 00:12:26.093 "strip_size_kb": 64, 00:12:26.093 "state": "offline", 00:12:26.093 "raid_level": "concat", 00:12:26.093 "superblock": true, 00:12:26.093 "num_base_bdevs": 3, 00:12:26.093 "num_base_bdevs_discovered": 2, 00:12:26.093 "num_base_bdevs_operational": 2, 00:12:26.093 "base_bdevs_list": [ 00:12:26.093 { 00:12:26.093 "name": null, 00:12:26.093 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:26.093 "is_configured": false, 00:12:26.093 "data_offset": 0, 00:12:26.093 "data_size": 63488 00:12:26.093 }, 00:12:26.093 { 00:12:26.093 "name": "BaseBdev2", 00:12:26.093 "uuid": "2b2b8c39-6e00-484a-bfc6-f8ab0bfdcc8d", 00:12:26.093 "is_configured": true, 00:12:26.093 "data_offset": 2048, 00:12:26.093 "data_size": 63488 00:12:26.093 }, 00:12:26.093 { 00:12:26.093 "name": "BaseBdev3", 00:12:26.093 "uuid": "ff6c3314-687a-4ae2-a68f-3bf5d17b5733", 00:12:26.093 "is_configured": true, 00:12:26.093 "data_offset": 2048, 00:12:26.093 "data_size": 63488 00:12:26.093 } 00:12:26.093 ] 00:12:26.093 }' 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.093 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.660 20:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 [2024-11-26 20:25:19.988798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.660 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 [2024-11-26 20:25:20.152881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:26.660 [2024-11-26 20:25:20.152938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.919 BaseBdev2 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.919 
20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.919 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.920 [ 00:12:26.920 { 00:12:26.920 "name": "BaseBdev2", 00:12:26.920 "aliases": [ 00:12:26.920 "67911c58-4606-4413-a6cd-72fe49b0ccb1" 00:12:26.920 ], 00:12:26.920 "product_name": "Malloc disk", 00:12:26.920 "block_size": 512, 00:12:26.920 "num_blocks": 65536, 00:12:26.920 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:26.920 "assigned_rate_limits": { 00:12:26.920 "rw_ios_per_sec": 0, 00:12:26.920 "rw_mbytes_per_sec": 0, 00:12:26.920 "r_mbytes_per_sec": 0, 00:12:26.920 "w_mbytes_per_sec": 0 
00:12:26.920 }, 00:12:26.920 "claimed": false, 00:12:26.920 "zoned": false, 00:12:26.920 "supported_io_types": { 00:12:26.920 "read": true, 00:12:26.920 "write": true, 00:12:26.920 "unmap": true, 00:12:26.920 "flush": true, 00:12:26.920 "reset": true, 00:12:26.920 "nvme_admin": false, 00:12:26.920 "nvme_io": false, 00:12:26.920 "nvme_io_md": false, 00:12:26.920 "write_zeroes": true, 00:12:26.920 "zcopy": true, 00:12:26.920 "get_zone_info": false, 00:12:26.920 "zone_management": false, 00:12:26.920 "zone_append": false, 00:12:26.920 "compare": false, 00:12:26.920 "compare_and_write": false, 00:12:26.920 "abort": true, 00:12:26.920 "seek_hole": false, 00:12:26.920 "seek_data": false, 00:12:26.920 "copy": true, 00:12:26.920 "nvme_iov_md": false 00:12:26.920 }, 00:12:26.920 "memory_domains": [ 00:12:26.920 { 00:12:26.920 "dma_device_id": "system", 00:12:26.920 "dma_device_type": 1 00:12:26.920 }, 00:12:26.920 { 00:12:26.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.920 "dma_device_type": 2 00:12:26.920 } 00:12:26.920 ], 00:12:26.920 "driver_specific": {} 00:12:26.920 } 00:12:26.920 ] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.920 BaseBdev3 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.920 [ 00:12:26.920 { 00:12:26.920 "name": "BaseBdev3", 00:12:26.920 "aliases": [ 00:12:26.920 "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df" 00:12:26.920 ], 00:12:26.920 "product_name": "Malloc disk", 00:12:26.920 "block_size": 512, 00:12:26.920 "num_blocks": 65536, 00:12:26.920 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:26.920 "assigned_rate_limits": { 00:12:26.920 "rw_ios_per_sec": 0, 00:12:26.920 "rw_mbytes_per_sec": 0, 
00:12:26.920 "r_mbytes_per_sec": 0, 00:12:26.920 "w_mbytes_per_sec": 0 00:12:26.920 }, 00:12:26.920 "claimed": false, 00:12:26.920 "zoned": false, 00:12:26.920 "supported_io_types": { 00:12:26.920 "read": true, 00:12:26.920 "write": true, 00:12:26.920 "unmap": true, 00:12:26.920 "flush": true, 00:12:26.920 "reset": true, 00:12:26.920 "nvme_admin": false, 00:12:26.920 "nvme_io": false, 00:12:26.920 "nvme_io_md": false, 00:12:26.920 "write_zeroes": true, 00:12:26.920 "zcopy": true, 00:12:26.920 "get_zone_info": false, 00:12:26.920 "zone_management": false, 00:12:26.920 "zone_append": false, 00:12:26.920 "compare": false, 00:12:26.920 "compare_and_write": false, 00:12:26.920 "abort": true, 00:12:26.920 "seek_hole": false, 00:12:26.920 "seek_data": false, 00:12:26.920 "copy": true, 00:12:26.920 "nvme_iov_md": false 00:12:26.920 }, 00:12:26.920 "memory_domains": [ 00:12:26.920 { 00:12:26.920 "dma_device_id": "system", 00:12:26.920 "dma_device_type": 1 00:12:26.920 }, 00:12:26.920 { 00:12:26.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.920 "dma_device_type": 2 00:12:26.920 } 00:12:26.920 ], 00:12:26.920 "driver_specific": {} 00:12:26.920 } 00:12:26.920 ] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.920 [2024-11-26 20:25:20.457023] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.920 [2024-11-26 20:25:20.457126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.920 [2024-11-26 20:25:20.457172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.920 [2024-11-26 20:25:20.459002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.920 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.179 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.179 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.179 "name": "Existed_Raid", 00:12:27.179 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:27.179 "strip_size_kb": 64, 00:12:27.179 "state": "configuring", 00:12:27.179 "raid_level": "concat", 00:12:27.179 "superblock": true, 00:12:27.179 "num_base_bdevs": 3, 00:12:27.179 "num_base_bdevs_discovered": 2, 00:12:27.179 "num_base_bdevs_operational": 3, 00:12:27.179 "base_bdevs_list": [ 00:12:27.179 { 00:12:27.179 "name": "BaseBdev1", 00:12:27.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.179 "is_configured": false, 00:12:27.179 "data_offset": 0, 00:12:27.179 "data_size": 0 00:12:27.179 }, 00:12:27.179 { 00:12:27.179 "name": "BaseBdev2", 00:12:27.179 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:27.179 "is_configured": true, 00:12:27.179 "data_offset": 2048, 00:12:27.179 "data_size": 63488 00:12:27.179 }, 00:12:27.179 { 00:12:27.179 "name": "BaseBdev3", 00:12:27.179 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:27.179 "is_configured": true, 00:12:27.179 "data_offset": 2048, 00:12:27.179 "data_size": 63488 00:12:27.179 } 00:12:27.179 ] 00:12:27.179 }' 00:12:27.179 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.179 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.437 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
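The `verify_raid_bdev_state` helper seen above fetches `rpc_cmd bdev_raid_get_bdevs all`, filters the result with `jq -r '.[] | select(.name == "Existed_Raid")'`, and checks the raid bdev's state against the expected values. A minimal Python sketch of that same check (not SPDK code; the `discovered` helper is hypothetical, and the sample JSON is trimmed from the log output above):

```python
import json

# Trimmed copy of the raid_bdev_info JSON captured in the log above:
# BaseBdev1 is not configured yet, BaseBdev2 and BaseBdev3 are claimed.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "superblock": true,
  "num_base_bdevs": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false, "data_size": 0},
    {"name": "BaseBdev2", "is_configured": true, "data_size": 63488},
    {"name": "BaseBdev3", "is_configured": true, "data_size": 63488}
  ]
}
""")

def discovered(info):
    # num_base_bdevs_discovered is the count of slots with is_configured == true
    return sum(1 for b in info["base_bdevs_list"] if b["is_configured"])

# The checks verify_raid_bdev_state performs, expressed as assertions:
assert raid_bdev_info["state"] == "configuring"
assert raid_bdev_info["raid_level"] == "concat"
assert raid_bdev_info["strip_size_kb"] == 64
assert discovered(raid_bdev_info) == 2   # matches num_base_bdevs_discovered above
```

The raid stays in `configuring` rather than `online` because only 2 of the 3 operational base bdevs are present.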
BaseBdev2 00:12:27.437 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.438 [2024-11-26 20:25:20.904401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.438 20:25:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.438 "name": "Existed_Raid", 00:12:27.438 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:27.438 "strip_size_kb": 64, 00:12:27.438 "state": "configuring", 00:12:27.438 "raid_level": "concat", 00:12:27.438 "superblock": true, 00:12:27.438 "num_base_bdevs": 3, 00:12:27.438 "num_base_bdevs_discovered": 1, 00:12:27.438 "num_base_bdevs_operational": 3, 00:12:27.438 "base_bdevs_list": [ 00:12:27.438 { 00:12:27.438 "name": "BaseBdev1", 00:12:27.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.438 "is_configured": false, 00:12:27.438 "data_offset": 0, 00:12:27.438 "data_size": 0 00:12:27.438 }, 00:12:27.438 { 00:12:27.438 "name": null, 00:12:27.438 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:27.438 "is_configured": false, 00:12:27.438 "data_offset": 0, 00:12:27.438 "data_size": 63488 00:12:27.438 }, 00:12:27.438 { 00:12:27.438 "name": "BaseBdev3", 00:12:27.438 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:27.438 "is_configured": true, 00:12:27.438 "data_offset": 2048, 00:12:27.438 "data_size": 63488 00:12:27.438 } 00:12:27.438 ] 00:12:27.438 }' 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.438 20:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.005 [2024-11-26 20:25:21.450096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.005 BaseBdev1 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.005 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.005 [ 00:12:28.005 { 00:12:28.005 "name": "BaseBdev1", 00:12:28.005 "aliases": [ 00:12:28.005 "c720bca6-ef94-48a3-bda3-f26516523280" 00:12:28.005 ], 00:12:28.005 "product_name": "Malloc disk", 00:12:28.005 "block_size": 512, 00:12:28.005 "num_blocks": 65536, 00:12:28.005 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:28.005 "assigned_rate_limits": { 00:12:28.005 "rw_ios_per_sec": 0, 00:12:28.005 "rw_mbytes_per_sec": 0, 00:12:28.005 "r_mbytes_per_sec": 0, 00:12:28.005 "w_mbytes_per_sec": 0 00:12:28.005 }, 00:12:28.005 "claimed": true, 00:12:28.005 "claim_type": "exclusive_write", 00:12:28.005 "zoned": false, 00:12:28.005 "supported_io_types": { 00:12:28.005 "read": true, 00:12:28.005 "write": true, 00:12:28.005 "unmap": true, 00:12:28.005 "flush": true, 00:12:28.005 "reset": true, 00:12:28.005 "nvme_admin": false, 00:12:28.005 "nvme_io": false, 00:12:28.005 "nvme_io_md": false, 00:12:28.005 "write_zeroes": true, 00:12:28.005 "zcopy": true, 00:12:28.005 "get_zone_info": false, 00:12:28.005 "zone_management": false, 00:12:28.005 "zone_append": false, 00:12:28.005 "compare": false, 00:12:28.005 "compare_and_write": false, 00:12:28.005 "abort": true, 00:12:28.005 "seek_hole": false, 00:12:28.005 "seek_data": false, 00:12:28.005 "copy": true, 00:12:28.005 "nvme_iov_md": false 00:12:28.005 }, 00:12:28.005 "memory_domains": [ 00:12:28.005 { 00:12:28.005 "dma_device_id": "system", 00:12:28.005 "dma_device_type": 1 00:12:28.005 }, 00:12:28.005 { 00:12:28.005 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:28.005 "dma_device_type": 2 00:12:28.005 } 00:12:28.005 ], 00:12:28.005 "driver_specific": {} 00:12:28.005 } 00:12:28.005 ] 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.006 "name": "Existed_Raid", 00:12:28.006 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:28.006 "strip_size_kb": 64, 00:12:28.006 "state": "configuring", 00:12:28.006 "raid_level": "concat", 00:12:28.006 "superblock": true, 00:12:28.006 "num_base_bdevs": 3, 00:12:28.006 "num_base_bdevs_discovered": 2, 00:12:28.006 "num_base_bdevs_operational": 3, 00:12:28.006 "base_bdevs_list": [ 00:12:28.006 { 00:12:28.006 "name": "BaseBdev1", 00:12:28.006 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:28.006 "is_configured": true, 00:12:28.006 "data_offset": 2048, 00:12:28.006 "data_size": 63488 00:12:28.006 }, 00:12:28.006 { 00:12:28.006 "name": null, 00:12:28.006 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:28.006 "is_configured": false, 00:12:28.006 "data_offset": 0, 00:12:28.006 "data_size": 63488 00:12:28.006 }, 00:12:28.006 { 00:12:28.006 "name": "BaseBdev3", 00:12:28.006 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:28.006 "is_configured": true, 00:12:28.006 "data_offset": 2048, 00:12:28.006 "data_size": 63488 00:12:28.006 } 00:12:28.006 ] 00:12:28.006 }' 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.006 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.573 [2024-11-26 20:25:21.949337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:28.573 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.574 "name": "Existed_Raid", 00:12:28.574 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:28.574 "strip_size_kb": 64, 00:12:28.574 "state": "configuring", 00:12:28.574 "raid_level": "concat", 00:12:28.574 "superblock": true, 00:12:28.574 "num_base_bdevs": 3, 00:12:28.574 "num_base_bdevs_discovered": 1, 00:12:28.574 "num_base_bdevs_operational": 3, 00:12:28.574 "base_bdevs_list": [ 00:12:28.574 { 00:12:28.574 "name": "BaseBdev1", 00:12:28.574 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:28.574 "is_configured": true, 00:12:28.574 "data_offset": 2048, 00:12:28.574 "data_size": 63488 00:12:28.574 }, 00:12:28.574 { 00:12:28.574 "name": null, 00:12:28.574 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:28.574 "is_configured": false, 00:12:28.574 "data_offset": 0, 00:12:28.574 "data_size": 63488 00:12:28.574 }, 00:12:28.574 { 00:12:28.574 "name": null, 00:12:28.574 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:28.574 "is_configured": false, 00:12:28.574 "data_offset": 0, 00:12:28.574 "data_size": 63488 00:12:28.574 } 00:12:28.574 ] 00:12:28.574 }' 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.574 20:25:21 bdev_raid.raid_state_function_test_sb -- 
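The remove/re-add cycle above (`bdev_raid_remove_base_bdev` followed later by `bdev_raid_add_base_bdev`) shows that removing a base bdev from a superblock raid in `configuring` state clears the slot rather than deleting it: the slot's `name` becomes `null`, `is_configured` flips to `false`, and `data_offset` drops to 0, while `num_base_bdevs_operational` stays at 3. A toy model of that slot behavior (an illustration only, not SPDK source; `remove_base_bdev` here is a hypothetical stand-in for the RPC's effect on the reported JSON):

```python
def remove_base_bdev(info, name):
    """Mimic how bdev_raid_get_bdevs output changes after
    bdev_raid_remove_base_bdev: the slot is kept but emptied."""
    for slot in info["base_bdevs_list"]:
        if slot["name"] == name:
            slot["name"] = None          # slot retained, identity cleared
            slot["is_configured"] = False
            slot["data_offset"] = 0

raid = {
    "state": "configuring",
    "num_base_bdevs": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True, "data_offset": 2048},
        {"name": "BaseBdev2", "is_configured": True, "data_offset": 2048},
        {"name": "BaseBdev3", "is_configured": True, "data_offset": 2048},
    ],
}

remove_base_bdev(raid, "BaseBdev3")
configured = [s for s in raid["base_bdevs_list"] if s["is_configured"]]
# Two slots remain configured; the raid still reports 3 slots and
# stays "configuring" until all operational base bdevs are present again.
assert len(configured) == 2
assert len(raid["base_bdevs_list"]) == 3
assert raid["state"] == "configuring"
```

This is why the log's subsequent `jq '.[0].base_bdevs_list[2].is_configured'` check expects `false` right after the removal and `true` again after `bdev_raid_add_base_bdev Existed_Raid BaseBdev3`.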
common/autotest_common.sh@10 -- # set +x 00:12:28.832 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:28.832 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.832 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.832 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.832 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.091 [2024-11-26 20:25:22.408680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.091 20:25:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.091 "name": "Existed_Raid", 00:12:29.091 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:29.091 "strip_size_kb": 64, 00:12:29.091 "state": "configuring", 00:12:29.091 "raid_level": "concat", 00:12:29.091 "superblock": true, 00:12:29.091 "num_base_bdevs": 3, 00:12:29.091 "num_base_bdevs_discovered": 2, 00:12:29.091 "num_base_bdevs_operational": 3, 00:12:29.091 "base_bdevs_list": [ 00:12:29.091 { 00:12:29.091 "name": "BaseBdev1", 00:12:29.091 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:29.091 "is_configured": true, 00:12:29.091 "data_offset": 2048, 00:12:29.091 "data_size": 63488 00:12:29.091 }, 00:12:29.091 { 00:12:29.091 "name": null, 00:12:29.091 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:29.091 "is_configured": 
false, 00:12:29.091 "data_offset": 0, 00:12:29.091 "data_size": 63488 00:12:29.091 }, 00:12:29.091 { 00:12:29.091 "name": "BaseBdev3", 00:12:29.091 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:29.091 "is_configured": true, 00:12:29.091 "data_offset": 2048, 00:12:29.091 "data_size": 63488 00:12:29.091 } 00:12:29.091 ] 00:12:29.091 }' 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.091 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.350 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.350 [2024-11-26 20:25:22.863909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:29.649 20:25:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.649 20:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.649 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.649 "name": "Existed_Raid", 00:12:29.649 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:29.649 "strip_size_kb": 64, 00:12:29.649 "state": "configuring", 00:12:29.649 "raid_level": "concat", 00:12:29.649 "superblock": true, 00:12:29.649 "num_base_bdevs": 3, 00:12:29.649 
"num_base_bdevs_discovered": 1, 00:12:29.649 "num_base_bdevs_operational": 3, 00:12:29.649 "base_bdevs_list": [ 00:12:29.649 { 00:12:29.649 "name": null, 00:12:29.649 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:29.649 "is_configured": false, 00:12:29.649 "data_offset": 0, 00:12:29.649 "data_size": 63488 00:12:29.649 }, 00:12:29.649 { 00:12:29.649 "name": null, 00:12:29.649 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:29.649 "is_configured": false, 00:12:29.649 "data_offset": 0, 00:12:29.650 "data_size": 63488 00:12:29.650 }, 00:12:29.650 { 00:12:29.650 "name": "BaseBdev3", 00:12:29.650 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:29.650 "is_configured": true, 00:12:29.650 "data_offset": 2048, 00:12:29.650 "data_size": 63488 00:12:29.650 } 00:12:29.650 ] 00:12:29.650 }' 00:12:29.650 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.650 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.908 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.908 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:29.908 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.908 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.166 20:25:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.166 [2024-11-26 20:25:23.492298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.166 
20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.166 "name": "Existed_Raid", 00:12:30.166 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:30.166 "strip_size_kb": 64, 00:12:30.166 "state": "configuring", 00:12:30.166 "raid_level": "concat", 00:12:30.166 "superblock": true, 00:12:30.166 "num_base_bdevs": 3, 00:12:30.166 "num_base_bdevs_discovered": 2, 00:12:30.166 "num_base_bdevs_operational": 3, 00:12:30.166 "base_bdevs_list": [ 00:12:30.166 { 00:12:30.166 "name": null, 00:12:30.166 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:30.166 "is_configured": false, 00:12:30.166 "data_offset": 0, 00:12:30.166 "data_size": 63488 00:12:30.166 }, 00:12:30.166 { 00:12:30.166 "name": "BaseBdev2", 00:12:30.166 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:30.166 "is_configured": true, 00:12:30.166 "data_offset": 2048, 00:12:30.166 "data_size": 63488 00:12:30.166 }, 00:12:30.166 { 00:12:30.166 "name": "BaseBdev3", 00:12:30.166 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:30.166 "is_configured": true, 00:12:30.166 "data_offset": 2048, 00:12:30.166 "data_size": 63488 00:12:30.166 } 00:12:30.166 ] 00:12:30.166 }' 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.166 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.426 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.426 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:30.426 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.426 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
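As an annotation on the trace above (not part of the log): `verify_raid_bdev_state` in `bdev_raid.sh` captures the output of `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and compares individual fields against the expected values passed in. A minimal Python sketch of that comparison, fed with the configuring-state record dumped above (field names and values are taken verbatim from the log; the helper name mirrors the shell function but the implementation is illustrative only):

```python
import json

# Record as dumped by `rpc_cmd bdev_raid_get_bdevs all` for Existed_Raid
# (values copied from the raid_bdev_info assignment in the trace above).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the shell function's field-by-field comparison.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# After bdev_raid_add_base_bdev claims BaseBdev2, two of three base bdevs are
# discovered, so the raid bdev is still "configuring".
verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3)
```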
00:12:30.426 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.685 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:30.685 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.685 20:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:30.685 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.685 20:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c720bca6-ef94-48a3-bda3-f26516523280 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.685 [2024-11-26 20:25:24.077317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:30.685 [2024-11-26 20:25:24.077588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:30.685 [2024-11-26 20:25:24.077607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:30.685 NewBaseBdev 00:12:30.685 [2024-11-26 20:25:24.077897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:30.685 [2024-11-26 20:25:24.078069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:30.685 [2024-11-26 20:25:24.078082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:12:30.685 [2024-11-26 20:25:24.078233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.685 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.685 [ 00:12:30.685 { 00:12:30.685 "name": "NewBaseBdev", 00:12:30.685 "aliases": [ 00:12:30.685 "c720bca6-ef94-48a3-bda3-f26516523280" 00:12:30.685 ], 00:12:30.685 "product_name": "Malloc disk", 00:12:30.685 "block_size": 512, 
00:12:30.685 "num_blocks": 65536, 00:12:30.685 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:30.685 "assigned_rate_limits": { 00:12:30.685 "rw_ios_per_sec": 0, 00:12:30.685 "rw_mbytes_per_sec": 0, 00:12:30.685 "r_mbytes_per_sec": 0, 00:12:30.685 "w_mbytes_per_sec": 0 00:12:30.685 }, 00:12:30.685 "claimed": true, 00:12:30.685 "claim_type": "exclusive_write", 00:12:30.685 "zoned": false, 00:12:30.685 "supported_io_types": { 00:12:30.685 "read": true, 00:12:30.685 "write": true, 00:12:30.685 "unmap": true, 00:12:30.685 "flush": true, 00:12:30.685 "reset": true, 00:12:30.685 "nvme_admin": false, 00:12:30.685 "nvme_io": false, 00:12:30.685 "nvme_io_md": false, 00:12:30.685 "write_zeroes": true, 00:12:30.685 "zcopy": true, 00:12:30.685 "get_zone_info": false, 00:12:30.685 "zone_management": false, 00:12:30.685 "zone_append": false, 00:12:30.685 "compare": false, 00:12:30.685 "compare_and_write": false, 00:12:30.685 "abort": true, 00:12:30.685 "seek_hole": false, 00:12:30.685 "seek_data": false, 00:12:30.685 "copy": true, 00:12:30.685 "nvme_iov_md": false 00:12:30.685 }, 00:12:30.685 "memory_domains": [ 00:12:30.685 { 00:12:30.685 "dma_device_id": "system", 00:12:30.685 "dma_device_type": 1 00:12:30.685 }, 00:12:30.685 { 00:12:30.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.686 "dma_device_type": 2 00:12:30.686 } 00:12:30.686 ], 00:12:30.686 "driver_specific": {} 00:12:30.686 } 00:12:30.686 ] 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.686 "name": "Existed_Raid", 00:12:30.686 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:30.686 "strip_size_kb": 64, 00:12:30.686 "state": "online", 00:12:30.686 "raid_level": "concat", 00:12:30.686 "superblock": true, 00:12:30.686 "num_base_bdevs": 3, 00:12:30.686 "num_base_bdevs_discovered": 3, 00:12:30.686 "num_base_bdevs_operational": 3, 00:12:30.686 "base_bdevs_list": [ 00:12:30.686 { 00:12:30.686 "name": "NewBaseBdev", 00:12:30.686 "uuid": 
"c720bca6-ef94-48a3-bda3-f26516523280", 00:12:30.686 "is_configured": true, 00:12:30.686 "data_offset": 2048, 00:12:30.686 "data_size": 63488 00:12:30.686 }, 00:12:30.686 { 00:12:30.686 "name": "BaseBdev2", 00:12:30.686 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:30.686 "is_configured": true, 00:12:30.686 "data_offset": 2048, 00:12:30.686 "data_size": 63488 00:12:30.686 }, 00:12:30.686 { 00:12:30.686 "name": "BaseBdev3", 00:12:30.686 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:30.686 "is_configured": true, 00:12:30.686 "data_offset": 2048, 00:12:30.686 "data_size": 63488 00:12:30.686 } 00:12:30.686 ] 00:12:30.686 }' 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.686 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:12:31.253 [2024-11-26 20:25:24.625049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.253 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.253 "name": "Existed_Raid", 00:12:31.253 "aliases": [ 00:12:31.253 "680bff8a-7e62-453e-998c-0beece8758c6" 00:12:31.253 ], 00:12:31.253 "product_name": "Raid Volume", 00:12:31.253 "block_size": 512, 00:12:31.253 "num_blocks": 190464, 00:12:31.253 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:31.253 "assigned_rate_limits": { 00:12:31.253 "rw_ios_per_sec": 0, 00:12:31.253 "rw_mbytes_per_sec": 0, 00:12:31.253 "r_mbytes_per_sec": 0, 00:12:31.253 "w_mbytes_per_sec": 0 00:12:31.253 }, 00:12:31.253 "claimed": false, 00:12:31.253 "zoned": false, 00:12:31.253 "supported_io_types": { 00:12:31.253 "read": true, 00:12:31.253 "write": true, 00:12:31.253 "unmap": true, 00:12:31.253 "flush": true, 00:12:31.253 "reset": true, 00:12:31.253 "nvme_admin": false, 00:12:31.253 "nvme_io": false, 00:12:31.253 "nvme_io_md": false, 00:12:31.253 "write_zeroes": true, 00:12:31.253 "zcopy": false, 00:12:31.253 "get_zone_info": false, 00:12:31.253 "zone_management": false, 00:12:31.253 "zone_append": false, 00:12:31.253 "compare": false, 00:12:31.253 "compare_and_write": false, 00:12:31.253 "abort": false, 00:12:31.253 "seek_hole": false, 00:12:31.253 "seek_data": false, 00:12:31.253 "copy": false, 00:12:31.253 "nvme_iov_md": false 00:12:31.253 }, 00:12:31.253 "memory_domains": [ 00:12:31.253 { 00:12:31.253 "dma_device_id": "system", 00:12:31.253 "dma_device_type": 1 00:12:31.253 }, 00:12:31.253 { 00:12:31.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.253 "dma_device_type": 2 00:12:31.253 }, 00:12:31.253 { 00:12:31.253 "dma_device_id": "system", 00:12:31.253 "dma_device_type": 1 00:12:31.253 }, 00:12:31.253 { 00:12:31.253 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.253 "dma_device_type": 2 00:12:31.253 }, 00:12:31.253 { 00:12:31.253 "dma_device_id": "system", 00:12:31.253 "dma_device_type": 1 00:12:31.253 }, 00:12:31.253 { 00:12:31.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.253 "dma_device_type": 2 00:12:31.253 } 00:12:31.253 ], 00:12:31.253 "driver_specific": { 00:12:31.253 "raid": { 00:12:31.254 "uuid": "680bff8a-7e62-453e-998c-0beece8758c6", 00:12:31.254 "strip_size_kb": 64, 00:12:31.254 "state": "online", 00:12:31.254 "raid_level": "concat", 00:12:31.254 "superblock": true, 00:12:31.254 "num_base_bdevs": 3, 00:12:31.254 "num_base_bdevs_discovered": 3, 00:12:31.254 "num_base_bdevs_operational": 3, 00:12:31.254 "base_bdevs_list": [ 00:12:31.254 { 00:12:31.254 "name": "NewBaseBdev", 00:12:31.254 "uuid": "c720bca6-ef94-48a3-bda3-f26516523280", 00:12:31.254 "is_configured": true, 00:12:31.254 "data_offset": 2048, 00:12:31.254 "data_size": 63488 00:12:31.254 }, 00:12:31.254 { 00:12:31.254 "name": "BaseBdev2", 00:12:31.254 "uuid": "67911c58-4606-4413-a6cd-72fe49b0ccb1", 00:12:31.254 "is_configured": true, 00:12:31.254 "data_offset": 2048, 00:12:31.254 "data_size": 63488 00:12:31.254 }, 00:12:31.254 { 00:12:31.254 "name": "BaseBdev3", 00:12:31.254 "uuid": "fbacc7d3-aa65-4d7c-b414-81fa8f5e27df", 00:12:31.254 "is_configured": true, 00:12:31.254 "data_offset": 2048, 00:12:31.254 "data_size": 63488 00:12:31.254 } 00:12:31.254 ] 00:12:31.254 } 00:12:31.254 } 00:12:31.254 }' 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:31.254 BaseBdev2 00:12:31.254 BaseBdev3' 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
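One detail worth pulling out of the two `bdev_get_bdevs` dumps above (again as an annotation, not part of the log): the Malloc base bdevs advertise `zcopy`, `abort` and `copy` support, while the concat Raid Volume reports all three as false. Plausibly this is because those are single-descriptor operations that cannot be forwarded across multiple base bdevs, whereas fan-out operations like `flush` and `reset` remain supported. A small sketch restating the capability mask, with values copied from the `supported_io_types` objects in the trace:

```python
# supported_io_types values copied from the log's bdev_get_bdevs output:
# NewBaseBdev ("Malloc disk") vs Existed_Raid ("Raid Volume", concat).
malloc_io = {"zcopy": True,  "abort": True,  "copy": True,  "flush": True, "reset": True}
raid_io   = {"zcopy": False, "abort": False, "copy": False, "flush": True, "reset": True}

# Single-target ops are masked off on the raid volume; fan-out ops survive.
for op in ("zcopy", "abort", "copy"):
    assert malloc_io[op] and not raid_io[op]
for op in ("flush", "reset"):
    assert raid_io[op]
```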
00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.254 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.513 [2024-11-26 20:25:24.916210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.513 [2024-11-26 20:25:24.916258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.513 [2024-11-26 20:25:24.916364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.513 [2024-11-26 20:25:24.916429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.513 [2024-11-26 20:25:24.916443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66530 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66530 ']' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66530 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66530 00:12:31.513 killing process with pid 66530 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66530' 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66530 00:12:31.513 [2024-11-26 20:25:24.964631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.513 20:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66530 00:12:31.772 [2024-11-26 20:25:25.293453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.153 20:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:33.153 00:12:33.153 real 0m10.857s 00:12:33.153 user 0m17.146s 00:12:33.153 sys 0m1.814s 00:12:33.153 20:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:33.153 ************************************ 00:12:33.153 END TEST raid_state_function_test_sb 00:12:33.153 ************************************ 00:12:33.153 20:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.153 20:25:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:33.153 20:25:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.153 20:25:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.153 20:25:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.153 ************************************ 00:12:33.153 START TEST raid_superblock_test 00:12:33.153 ************************************ 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:33.153 20:25:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:33.153 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67156 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67156 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67156 ']' 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.154 20:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.154 [2024-11-26 20:25:26.682567] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:12:33.154 [2024-11-26 20:25:26.682769] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67156 ] 00:12:33.413 [2024-11-26 20:25:26.858512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.673 [2024-11-26 20:25:26.983472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.673 [2024-11-26 20:25:27.192452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.673 [2024-11-26 20:25:27.192630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:34.243 
20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.243 malloc1 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.243 [2024-11-26 20:25:27.599126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:34.243 [2024-11-26 20:25:27.599194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.243 [2024-11-26 20:25:27.599218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:34.243 [2024-11-26 20:25:27.599240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.243 [2024-11-26 20:25:27.601722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.243 [2024-11-26 20:25:27.601813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:34.243 pt1 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.243 malloc2 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.243 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.244 [2024-11-26 20:25:27.660121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:34.244 [2024-11-26 20:25:27.660226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.244 [2024-11-26 20:25:27.660286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:34.244 [2024-11-26 20:25:27.660321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.244 [2024-11-26 20:25:27.662611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.244 [2024-11-26 20:25:27.662678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:34.244 
pt2 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.244 malloc3 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.244 [2024-11-26 20:25:27.731945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:34.244 [2024-11-26 20:25:27.732040] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.244 [2024-11-26 20:25:27.732078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:34.244 [2024-11-26 20:25:27.732106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.244 [2024-11-26 20:25:27.734341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.244 [2024-11-26 20:25:27.734410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:34.244 pt3 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.244 [2024-11-26 20:25:27.743972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:34.244 [2024-11-26 20:25:27.745938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:34.244 [2024-11-26 20:25:27.746053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:34.244 [2024-11-26 20:25:27.746223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:34.244 [2024-11-26 20:25:27.746253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:34.244 [2024-11-26 20:25:27.746493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
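The trace above is the setup loop from `bdev_raid.sh` (the `@416`–`@426` markers): each pass creates a 32 MiB, 512-byte-block malloc bdev, wraps it in a passthru bdev with a deterministic all-zeros UUID, and records the names; `@430` then assembles the three passthru bdevs into a `concat` array with an on-disk superblock (`-s`). A minimal dry-run sketch of that loop — the `rpc` stand-in here just echoes the call so the sketch runs anywhere, whereas the real script's `rpc_cmd` talks to the SPDK application; the RPC names and arguments are taken verbatim from the trace:

```shell
# Stand-in for rpc_cmd: echo the JSON-RPC method instead of invoking SPDK.
rpc() { echo "rpc_cmd $*"; }

num_base_bdevs=3
for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    # Deterministic UUID per base bdev, e.g. 00000000-0000-0000-0000-000000000001
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
    rpc bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
# Assemble the passthru bdevs into a concat raid with a superblock (-s),
# 64 KiB strip size, as in bdev_raid.sh@430:
rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
```

After creation the script calls `verify_raid_bdev_state raid_bdev1 online concat 64 3`, which is what produces the `bdev_raid_get_bdevs all | jq` output that follows.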
00:12:34.244 [2024-11-26 20:25:27.746647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:34.244 [2024-11-26 20:25:27.746661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:34.244 [2024-11-26 20:25:27.746795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.244 20:25:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.244 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.504 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.504 "name": "raid_bdev1", 00:12:34.504 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:34.504 "strip_size_kb": 64, 00:12:34.504 "state": "online", 00:12:34.504 "raid_level": "concat", 00:12:34.504 "superblock": true, 00:12:34.504 "num_base_bdevs": 3, 00:12:34.504 "num_base_bdevs_discovered": 3, 00:12:34.504 "num_base_bdevs_operational": 3, 00:12:34.504 "base_bdevs_list": [ 00:12:34.504 { 00:12:34.504 "name": "pt1", 00:12:34.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.504 "is_configured": true, 00:12:34.504 "data_offset": 2048, 00:12:34.504 "data_size": 63488 00:12:34.504 }, 00:12:34.504 { 00:12:34.504 "name": "pt2", 00:12:34.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.504 "is_configured": true, 00:12:34.504 "data_offset": 2048, 00:12:34.504 "data_size": 63488 00:12:34.504 }, 00:12:34.504 { 00:12:34.504 "name": "pt3", 00:12:34.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.504 "is_configured": true, 00:12:34.504 "data_offset": 2048, 00:12:34.504 "data_size": 63488 00:12:34.504 } 00:12:34.504 ] 00:12:34.504 }' 00:12:34.504 20:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.504 20:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.765 [2024-11-26 20:25:28.155621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.765 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.765 "name": "raid_bdev1", 00:12:34.765 "aliases": [ 00:12:34.765 "f57df1b0-dbe7-48d5-b849-0082226ef926" 00:12:34.765 ], 00:12:34.765 "product_name": "Raid Volume", 00:12:34.765 "block_size": 512, 00:12:34.765 "num_blocks": 190464, 00:12:34.765 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:34.765 "assigned_rate_limits": { 00:12:34.765 "rw_ios_per_sec": 0, 00:12:34.765 "rw_mbytes_per_sec": 0, 00:12:34.765 "r_mbytes_per_sec": 0, 00:12:34.765 "w_mbytes_per_sec": 0 00:12:34.765 }, 00:12:34.765 "claimed": false, 00:12:34.765 "zoned": false, 00:12:34.765 "supported_io_types": { 00:12:34.765 "read": true, 00:12:34.765 "write": true, 00:12:34.765 "unmap": true, 00:12:34.765 "flush": true, 00:12:34.765 "reset": true, 00:12:34.765 "nvme_admin": false, 00:12:34.765 "nvme_io": false, 00:12:34.765 "nvme_io_md": false, 00:12:34.765 "write_zeroes": true, 00:12:34.765 "zcopy": false, 00:12:34.765 "get_zone_info": false, 00:12:34.765 "zone_management": false, 00:12:34.765 "zone_append": false, 00:12:34.765 "compare": 
false, 00:12:34.765 "compare_and_write": false, 00:12:34.765 "abort": false, 00:12:34.765 "seek_hole": false, 00:12:34.765 "seek_data": false, 00:12:34.765 "copy": false, 00:12:34.765 "nvme_iov_md": false 00:12:34.765 }, 00:12:34.765 "memory_domains": [ 00:12:34.765 { 00:12:34.765 "dma_device_id": "system", 00:12:34.765 "dma_device_type": 1 00:12:34.765 }, 00:12:34.765 { 00:12:34.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.765 "dma_device_type": 2 00:12:34.765 }, 00:12:34.765 { 00:12:34.765 "dma_device_id": "system", 00:12:34.765 "dma_device_type": 1 00:12:34.765 }, 00:12:34.765 { 00:12:34.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.765 "dma_device_type": 2 00:12:34.765 }, 00:12:34.765 { 00:12:34.765 "dma_device_id": "system", 00:12:34.765 "dma_device_type": 1 00:12:34.765 }, 00:12:34.765 { 00:12:34.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.765 "dma_device_type": 2 00:12:34.765 } 00:12:34.765 ], 00:12:34.765 "driver_specific": { 00:12:34.765 "raid": { 00:12:34.765 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:34.765 "strip_size_kb": 64, 00:12:34.765 "state": "online", 00:12:34.765 "raid_level": "concat", 00:12:34.765 "superblock": true, 00:12:34.765 "num_base_bdevs": 3, 00:12:34.765 "num_base_bdevs_discovered": 3, 00:12:34.765 "num_base_bdevs_operational": 3, 00:12:34.765 "base_bdevs_list": [ 00:12:34.765 { 00:12:34.765 "name": "pt1", 00:12:34.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.765 "is_configured": true, 00:12:34.765 "data_offset": 2048, 00:12:34.765 "data_size": 63488 00:12:34.765 }, 00:12:34.765 { 00:12:34.765 "name": "pt2", 00:12:34.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.765 "is_configured": true, 00:12:34.765 "data_offset": 2048, 00:12:34.765 "data_size": 63488 00:12:34.765 }, 00:12:34.765 { 00:12:34.765 "name": "pt3", 00:12:34.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.765 "is_configured": true, 00:12:34.765 "data_offset": 2048, 00:12:34.765 
"data_size": 63488 00:12:34.765 } 00:12:34.766 ] 00:12:34.766 } 00:12:34.766 } 00:12:34.766 }' 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:34.766 pt2 00:12:34.766 pt3' 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.766 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:35.026 20:25:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.026 [2024-11-26 20:25:28.387183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.026 20:25:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f57df1b0-dbe7-48d5-b849-0082226ef926 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f57df1b0-dbe7-48d5-b849-0082226ef926 ']' 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.026 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.026 [2024-11-26 20:25:28.418813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.026 [2024-11-26 20:25:28.418857] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.026 [2024-11-26 20:25:28.418945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.026 [2024-11-26 20:25:28.419008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.026 [2024-11-26 20:25:28.419019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.027 20:25:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
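The teardown just traced (`bdev_raid.sh@441`–`@449`) deletes the raid bdev first and then each passthru base bdev in turn; the `@451` check that follows asserts via `jq` that no bdev with `product_name == "passthru"` survives. A dry-run sketch of that order-sensitive teardown, again with an echoing `rpc` stand-in in place of the real `rpc_cmd`:

```shell
# Stand-in for rpc_cmd: echo the JSON-RPC method instead of invoking SPDK.
rpc() { echo "rpc_cmd $*"; }

base_bdevs_pt=(pt1 pt2 pt3)
# Delete the raid volume before its base bdevs, mirroring @441 then @448-449.
rpc bdev_raid_delete raid_bdev1
for i in "${base_bdevs_pt[@]}"; do
    rpc bdev_passthru_delete "$i"
done
```

With all passthru bdevs gone, the malloc bdevs still carry stale raid superblocks, which is why the subsequent `NOT rpc_cmd bdev_raid_create ... -b 'malloc1 malloc2 malloc3'` in the trace is expected to fail with `-17` (`File exists`).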
00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.027 [2024-11-26 20:25:28.554654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:35.027 [2024-11-26 20:25:28.556735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:12:35.027 [2024-11-26 20:25:28.556866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:35.027 [2024-11-26 20:25:28.556936] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:35.027 [2024-11-26 20:25:28.557000] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:35.027 [2024-11-26 20:25:28.557023] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:35.027 [2024-11-26 20:25:28.557044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.027 [2024-11-26 20:25:28.557055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:35.027 request: 00:12:35.027 { 00:12:35.027 "name": "raid_bdev1", 00:12:35.027 "raid_level": "concat", 00:12:35.027 "base_bdevs": [ 00:12:35.027 "malloc1", 00:12:35.027 "malloc2", 00:12:35.027 "malloc3" 00:12:35.027 ], 00:12:35.027 "strip_size_kb": 64, 00:12:35.027 "superblock": false, 00:12:35.027 "method": "bdev_raid_create", 00:12:35.027 "req_id": 1 00:12:35.027 } 00:12:35.027 Got JSON-RPC error response 00:12:35.027 response: 00:12:35.027 { 00:12:35.027 "code": -17, 00:12:35.027 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:35.027 } 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:35.027 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.288 [2024-11-26 20:25:28.614470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:35.288 [2024-11-26 20:25:28.614573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.288 [2024-11-26 20:25:28.614614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:35.288 [2024-11-26 20:25:28.614646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.288 [2024-11-26 20:25:28.617058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.288 [2024-11-26 20:25:28.617139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:35.288 [2024-11-26 20:25:28.617275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:35.288 [2024-11-26 20:25:28.617366] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:35.288 pt1 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.288 "name": "raid_bdev1", 
00:12:35.288 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:35.288 "strip_size_kb": 64, 00:12:35.288 "state": "configuring", 00:12:35.288 "raid_level": "concat", 00:12:35.288 "superblock": true, 00:12:35.288 "num_base_bdevs": 3, 00:12:35.288 "num_base_bdevs_discovered": 1, 00:12:35.288 "num_base_bdevs_operational": 3, 00:12:35.288 "base_bdevs_list": [ 00:12:35.288 { 00:12:35.288 "name": "pt1", 00:12:35.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.288 "is_configured": true, 00:12:35.288 "data_offset": 2048, 00:12:35.288 "data_size": 63488 00:12:35.288 }, 00:12:35.288 { 00:12:35.288 "name": null, 00:12:35.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.288 "is_configured": false, 00:12:35.288 "data_offset": 2048, 00:12:35.288 "data_size": 63488 00:12:35.288 }, 00:12:35.288 { 00:12:35.288 "name": null, 00:12:35.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.288 "is_configured": false, 00:12:35.288 "data_offset": 2048, 00:12:35.288 "data_size": 63488 00:12:35.288 } 00:12:35.288 ] 00:12:35.288 }' 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.288 20:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.547 [2024-11-26 20:25:29.085727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:35.547 [2024-11-26 20:25:29.085817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.547 [2024-11-26 20:25:29.085863] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:35.547 [2024-11-26 20:25:29.085872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.547 [2024-11-26 20:25:29.086328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.547 [2024-11-26 20:25:29.086346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:35.547 [2024-11-26 20:25:29.086442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:35.547 [2024-11-26 20:25:29.086472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:35.547 pt2 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.547 [2024-11-26 20:25:29.093697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.547 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.807 "name": "raid_bdev1", 00:12:35.807 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:35.807 "strip_size_kb": 64, 00:12:35.807 "state": "configuring", 00:12:35.807 "raid_level": "concat", 00:12:35.807 "superblock": true, 00:12:35.807 "num_base_bdevs": 3, 00:12:35.807 "num_base_bdevs_discovered": 1, 00:12:35.807 "num_base_bdevs_operational": 3, 00:12:35.807 "base_bdevs_list": [ 00:12:35.807 { 00:12:35.807 "name": "pt1", 00:12:35.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.807 "is_configured": true, 00:12:35.807 "data_offset": 2048, 00:12:35.807 "data_size": 63488 00:12:35.807 }, 00:12:35.807 { 00:12:35.807 "name": null, 00:12:35.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.807 "is_configured": false, 00:12:35.807 "data_offset": 0, 00:12:35.807 "data_size": 63488 00:12:35.807 }, 00:12:35.807 { 00:12:35.807 "name": null, 00:12:35.807 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.807 "is_configured": false, 00:12:35.807 "data_offset": 2048, 00:12:35.807 "data_size": 63488 00:12:35.807 } 00:12:35.807 ] 00:12:35.807 }' 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.807 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.067 [2024-11-26 20:25:29.528984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:36.067 [2024-11-26 20:25:29.529115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.067 [2024-11-26 20:25:29.529154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:36.067 [2024-11-26 20:25:29.529188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.067 [2024-11-26 20:25:29.529732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.067 [2024-11-26 20:25:29.529802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:36.067 [2024-11-26 20:25:29.529926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:36.067 [2024-11-26 20:25:29.529987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:36.067 pt2 00:12:36.067 20:25:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.067 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.067 [2024-11-26 20:25:29.540938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:36.067 [2024-11-26 20:25:29.540990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.067 [2024-11-26 20:25:29.541005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.067 [2024-11-26 20:25:29.541014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.067 [2024-11-26 20:25:29.541403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.067 [2024-11-26 20:25:29.541426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:36.067 [2024-11-26 20:25:29.541489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:36.067 [2024-11-26 20:25:29.541510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:36.067 [2024-11-26 20:25:29.541644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:36.067 [2024-11-26 20:25:29.541657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:36.067 [2024-11-26 20:25:29.541967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:12:36.068 [2024-11-26 20:25:29.542133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:36.068 [2024-11-26 20:25:29.542149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:36.068 [2024-11-26 20:25:29.542316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.068 pt3 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.068 20:25:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.068 "name": "raid_bdev1", 00:12:36.068 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:36.068 "strip_size_kb": 64, 00:12:36.068 "state": "online", 00:12:36.068 "raid_level": "concat", 00:12:36.068 "superblock": true, 00:12:36.068 "num_base_bdevs": 3, 00:12:36.068 "num_base_bdevs_discovered": 3, 00:12:36.068 "num_base_bdevs_operational": 3, 00:12:36.068 "base_bdevs_list": [ 00:12:36.068 { 00:12:36.068 "name": "pt1", 00:12:36.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.068 "is_configured": true, 00:12:36.068 "data_offset": 2048, 00:12:36.068 "data_size": 63488 00:12:36.068 }, 00:12:36.068 { 00:12:36.068 "name": "pt2", 00:12:36.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.068 "is_configured": true, 00:12:36.068 "data_offset": 2048, 00:12:36.068 "data_size": 63488 00:12:36.068 }, 00:12:36.068 { 00:12:36.068 "name": "pt3", 00:12:36.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.068 "is_configured": true, 00:12:36.068 "data_offset": 2048, 00:12:36.068 "data_size": 63488 00:12:36.068 } 00:12:36.068 ] 00:12:36.068 }' 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.068 20:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.638 20:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.638 [2024-11-26 20:25:30.008616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:36.638 "name": "raid_bdev1", 00:12:36.638 "aliases": [ 00:12:36.638 "f57df1b0-dbe7-48d5-b849-0082226ef926" 00:12:36.638 ], 00:12:36.638 "product_name": "Raid Volume", 00:12:36.638 "block_size": 512, 00:12:36.638 "num_blocks": 190464, 00:12:36.638 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:36.638 "assigned_rate_limits": { 00:12:36.638 "rw_ios_per_sec": 0, 00:12:36.638 "rw_mbytes_per_sec": 0, 00:12:36.638 "r_mbytes_per_sec": 0, 00:12:36.638 "w_mbytes_per_sec": 0 00:12:36.638 }, 00:12:36.638 "claimed": false, 00:12:36.638 "zoned": false, 00:12:36.638 "supported_io_types": { 00:12:36.638 "read": true, 00:12:36.638 "write": true, 00:12:36.638 "unmap": true, 00:12:36.638 "flush": true, 00:12:36.638 "reset": true, 00:12:36.638 "nvme_admin": false, 00:12:36.638 "nvme_io": false, 
00:12:36.638 "nvme_io_md": false, 00:12:36.638 "write_zeroes": true, 00:12:36.638 "zcopy": false, 00:12:36.638 "get_zone_info": false, 00:12:36.638 "zone_management": false, 00:12:36.638 "zone_append": false, 00:12:36.638 "compare": false, 00:12:36.638 "compare_and_write": false, 00:12:36.638 "abort": false, 00:12:36.638 "seek_hole": false, 00:12:36.638 "seek_data": false, 00:12:36.638 "copy": false, 00:12:36.638 "nvme_iov_md": false 00:12:36.638 }, 00:12:36.638 "memory_domains": [ 00:12:36.638 { 00:12:36.638 "dma_device_id": "system", 00:12:36.638 "dma_device_type": 1 00:12:36.638 }, 00:12:36.638 { 00:12:36.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.638 "dma_device_type": 2 00:12:36.638 }, 00:12:36.638 { 00:12:36.638 "dma_device_id": "system", 00:12:36.638 "dma_device_type": 1 00:12:36.638 }, 00:12:36.638 { 00:12:36.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.638 "dma_device_type": 2 00:12:36.638 }, 00:12:36.638 { 00:12:36.638 "dma_device_id": "system", 00:12:36.638 "dma_device_type": 1 00:12:36.638 }, 00:12:36.638 { 00:12:36.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.638 "dma_device_type": 2 00:12:36.638 } 00:12:36.638 ], 00:12:36.638 "driver_specific": { 00:12:36.638 "raid": { 00:12:36.638 "uuid": "f57df1b0-dbe7-48d5-b849-0082226ef926", 00:12:36.638 "strip_size_kb": 64, 00:12:36.638 "state": "online", 00:12:36.638 "raid_level": "concat", 00:12:36.638 "superblock": true, 00:12:36.638 "num_base_bdevs": 3, 00:12:36.638 "num_base_bdevs_discovered": 3, 00:12:36.638 "num_base_bdevs_operational": 3, 00:12:36.638 "base_bdevs_list": [ 00:12:36.638 { 00:12:36.638 "name": "pt1", 00:12:36.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.638 "is_configured": true, 00:12:36.638 "data_offset": 2048, 00:12:36.638 "data_size": 63488 00:12:36.638 }, 00:12:36.638 { 00:12:36.638 "name": "pt2", 00:12:36.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.638 "is_configured": true, 00:12:36.638 "data_offset": 2048, 00:12:36.638 
"data_size": 63488 00:12:36.638 }, 00:12:36.638 { 00:12:36.638 "name": "pt3", 00:12:36.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.638 "is_configured": true, 00:12:36.638 "data_offset": 2048, 00:12:36.638 "data_size": 63488 00:12:36.638 } 00:12:36.638 ] 00:12:36.638 } 00:12:36.638 } 00:12:36.638 }' 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:36.638 pt2 00:12:36.638 pt3' 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.638 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.898 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:36.899 [2024-11-26 20:25:30.300038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f57df1b0-dbe7-48d5-b849-0082226ef926 '!=' f57df1b0-dbe7-48d5-b849-0082226ef926 ']' 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67156 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67156 ']' 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67156 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67156 00:12:36.899 killing process with pid 67156 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67156' 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67156 00:12:36.899 [2024-11-26 20:25:30.384095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:12:36.899 [2024-11-26 20:25:30.384200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.899 [2024-11-26 20:25:30.384279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.899 [2024-11-26 20:25:30.384292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:36.899 20:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67156 00:12:37.159 [2024-11-26 20:25:30.703875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.539 20:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:38.539 00:12:38.539 real 0m5.289s 00:12:38.539 user 0m7.528s 00:12:38.539 sys 0m0.837s 00:12:38.539 20:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.539 ************************************ 00:12:38.539 END TEST raid_superblock_test 00:12:38.539 ************************************ 00:12:38.539 20:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.539 20:25:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:38.539 20:25:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:38.539 20:25:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.539 20:25:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.539 ************************************ 00:12:38.539 START TEST raid_read_error_test 00:12:38.539 ************************************ 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:38.539 20:25:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.303jmGc0yk 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67409 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67409 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67409 ']' 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.539 20:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.539 [2024-11-26 20:25:32.043538] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:12:38.539 [2024-11-26 20:25:32.043785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67409 ] 00:12:38.799 [2024-11-26 20:25:32.220736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.799 [2024-11-26 20:25:32.337618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.059 [2024-11-26 20:25:32.536742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.059 [2024-11-26 20:25:32.536808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 BaseBdev1_malloc 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 true 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 [2024-11-26 20:25:33.000018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:39.684 [2024-11-26 20:25:33.000154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.684 [2024-11-26 20:25:33.000182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:39.684 [2024-11-26 20:25:33.000193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.684 [2024-11-26 20:25:33.002763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.684 [2024-11-26 20:25:33.002812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:39.684 BaseBdev1 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 BaseBdev2_malloc 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 true 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 [2024-11-26 20:25:33.066158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:39.684 [2024-11-26 20:25:33.066269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.684 [2024-11-26 20:25:33.066305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:39.684 [2024-11-26 20:25:33.066335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.684 [2024-11-26 20:25:33.068494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.684 [2024-11-26 20:25:33.068578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:39.684 BaseBdev2 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 BaseBdev3_malloc 00:12:39.684 20:25:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 true 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.684 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 [2024-11-26 20:25:33.146009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:39.684 [2024-11-26 20:25:33.146152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.684 [2024-11-26 20:25:33.146180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:39.684 [2024-11-26 20:25:33.146193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.685 [2024-11-26 20:25:33.148811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.685 [2024-11-26 20:25:33.148859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:39.685 BaseBdev3 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.685 [2024-11-26 20:25:33.158105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.685 [2024-11-26 20:25:33.160234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.685 [2024-11-26 20:25:33.160352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.685 [2024-11-26 20:25:33.160610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:39.685 [2024-11-26 20:25:33.160632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:39.685 [2024-11-26 20:25:33.160956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:39.685 [2024-11-26 20:25:33.161153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:39.685 [2024-11-26 20:25:33.161169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:39.685 [2024-11-26 20:25:33.161384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.685 20:25:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.685 "name": "raid_bdev1", 00:12:39.685 "uuid": "39013a78-4cf5-4e41-94f2-7c841a950637", 00:12:39.685 "strip_size_kb": 64, 00:12:39.685 "state": "online", 00:12:39.685 "raid_level": "concat", 00:12:39.685 "superblock": true, 00:12:39.685 "num_base_bdevs": 3, 00:12:39.685 "num_base_bdevs_discovered": 3, 00:12:39.685 "num_base_bdevs_operational": 3, 00:12:39.685 "base_bdevs_list": [ 00:12:39.685 { 00:12:39.685 "name": "BaseBdev1", 00:12:39.685 "uuid": "ec89ed5d-546e-589c-ae52-3ddeb622c0a1", 00:12:39.685 "is_configured": true, 00:12:39.685 "data_offset": 2048, 00:12:39.685 "data_size": 63488 00:12:39.685 }, 00:12:39.685 { 00:12:39.685 "name": "BaseBdev2", 00:12:39.685 "uuid": "fcd1e07c-228b-5d13-a822-fb3955555b1e", 00:12:39.685 "is_configured": true, 00:12:39.685 "data_offset": 2048, 00:12:39.685 "data_size": 63488 
00:12:39.685 }, 00:12:39.685 { 00:12:39.685 "name": "BaseBdev3", 00:12:39.685 "uuid": "6cdbaffd-90a3-5402-b282-c3cae6136e2c", 00:12:39.685 "is_configured": true, 00:12:39.685 "data_offset": 2048, 00:12:39.685 "data_size": 63488 00:12:39.685 } 00:12:39.685 ] 00:12:39.685 }' 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.685 20:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.284 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:40.284 20:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:40.284 [2024-11-26 20:25:33.678679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.222 "name": "raid_bdev1", 00:12:41.222 "uuid": "39013a78-4cf5-4e41-94f2-7c841a950637", 00:12:41.222 "strip_size_kb": 64, 00:12:41.222 "state": "online", 00:12:41.222 "raid_level": "concat", 00:12:41.222 "superblock": true, 00:12:41.222 "num_base_bdevs": 3, 00:12:41.222 "num_base_bdevs_discovered": 3, 00:12:41.222 "num_base_bdevs_operational": 3, 00:12:41.222 "base_bdevs_list": [ 00:12:41.222 { 00:12:41.222 "name": "BaseBdev1", 00:12:41.222 "uuid": "ec89ed5d-546e-589c-ae52-3ddeb622c0a1", 00:12:41.222 "is_configured": true, 00:12:41.222 "data_offset": 2048, 00:12:41.222 "data_size": 63488 
00:12:41.222 }, 00:12:41.222 { 00:12:41.222 "name": "BaseBdev2", 00:12:41.222 "uuid": "fcd1e07c-228b-5d13-a822-fb3955555b1e", 00:12:41.222 "is_configured": true, 00:12:41.222 "data_offset": 2048, 00:12:41.222 "data_size": 63488 00:12:41.222 }, 00:12:41.222 { 00:12:41.222 "name": "BaseBdev3", 00:12:41.222 "uuid": "6cdbaffd-90a3-5402-b282-c3cae6136e2c", 00:12:41.222 "is_configured": true, 00:12:41.222 "data_offset": 2048, 00:12:41.222 "data_size": 63488 00:12:41.222 } 00:12:41.222 ] 00:12:41.222 }' 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.222 20:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.788 [2024-11-26 20:25:35.047429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.788 [2024-11-26 20:25:35.047529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.788 [2024-11-26 20:25:35.050556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.788 [2024-11-26 20:25:35.050644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.788 [2024-11-26 20:25:35.050702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.788 [2024-11-26 20:25:35.050742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:41.788 { 00:12:41.788 "results": [ 00:12:41.788 { 00:12:41.788 "job": "raid_bdev1", 00:12:41.788 "core_mask": "0x1", 00:12:41.788 "workload": "randrw", 00:12:41.788 "percentage": 50, 
00:12:41.788 "status": "finished", 00:12:41.788 "queue_depth": 1, 00:12:41.788 "io_size": 131072, 00:12:41.788 "runtime": 1.369196, 00:12:41.788 "iops": 14689.642680814142, 00:12:41.788 "mibps": 1836.2053351017678, 00:12:41.788 "io_failed": 1, 00:12:41.788 "io_timeout": 0, 00:12:41.788 "avg_latency_us": 94.3380946943036, 00:12:41.788 "min_latency_us": 27.72401746724891, 00:12:41.788 "max_latency_us": 1488.1537117903931 00:12:41.788 } 00:12:41.788 ], 00:12:41.788 "core_count": 1 00:12:41.788 } 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67409 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67409 ']' 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67409 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67409 00:12:41.788 killing process with pid 67409 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67409' 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67409 00:12:41.788 [2024-11-26 20:25:35.091126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.788 20:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67409 00:12:42.047 [2024-11-26 
20:25:35.342511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.303jmGc0yk 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:43.427 00:12:43.427 real 0m4.682s 00:12:43.427 user 0m5.567s 00:12:43.427 sys 0m0.525s 00:12:43.427 ************************************ 00:12:43.427 END TEST raid_read_error_test 00:12:43.427 ************************************ 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.427 20:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.427 20:25:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:12:43.427 20:25:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:43.427 20:25:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.427 20:25:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.427 ************************************ 00:12:43.427 START TEST raid_write_error_test 00:12:43.427 ************************************ 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:12:43.427 20:25:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:43.427 20:25:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zTXhjyZ3iQ 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67560 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67560 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67560 ']' 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.427 20:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.427 [2024-11-26 20:25:36.803839] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:12:43.427 [2024-11-26 20:25:36.803961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67560 ] 00:12:43.427 [2024-11-26 20:25:36.978559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.687 [2024-11-26 20:25:37.104571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.946 [2024-11-26 20:25:37.330763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.946 [2024-11-26 20:25:37.330832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.205 BaseBdev1_malloc 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.205 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.206 true 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.206 [2024-11-26 20:25:37.750063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.206 [2024-11-26 20:25:37.750132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.206 [2024-11-26 20:25:37.750156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:44.206 [2024-11-26 20:25:37.750168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.206 [2024-11-26 20:25:37.752577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.206 [2024-11-26 20:25:37.752625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.206 BaseBdev1 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.206 20:25:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.465 BaseBdev2_malloc 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 true 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 [2024-11-26 20:25:37.820666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:44.465 [2024-11-26 20:25:37.820740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.465 [2024-11-26 20:25:37.820762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:44.465 [2024-11-26 20:25:37.820775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.465 [2024-11-26 20:25:37.823188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.465 [2024-11-26 20:25:37.823233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.465 BaseBdev2 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.465 20:25:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 BaseBdev3_malloc 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 true 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 [2024-11-26 20:25:37.904740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.465 [2024-11-26 20:25:37.904799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.465 [2024-11-26 20:25:37.904821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.465 [2024-11-26 20:25:37.904833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.465 [2024-11-26 20:25:37.907138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.465 [2024-11-26 20:25:37.907264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:44.465 BaseBdev3 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 [2024-11-26 20:25:37.916806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.465 [2024-11-26 20:25:37.918799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.465 [2024-11-26 20:25:37.918889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.465 [2024-11-26 20:25:37.919091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:44.465 [2024-11-26 20:25:37.919103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:44.465 [2024-11-26 20:25:37.919372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:44.465 [2024-11-26 20:25:37.919573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:44.465 [2024-11-26 20:25:37.919588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:44.465 [2024-11-26 20:25:37.919748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.465 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.465 "name": "raid_bdev1", 00:12:44.465 "uuid": "fb7b0246-96f1-4aee-84ce-332a12c25fbb", 00:12:44.465 "strip_size_kb": 64, 00:12:44.465 "state": "online", 00:12:44.465 "raid_level": "concat", 00:12:44.465 "superblock": true, 00:12:44.465 "num_base_bdevs": 3, 00:12:44.465 "num_base_bdevs_discovered": 3, 00:12:44.465 "num_base_bdevs_operational": 3, 00:12:44.465 "base_bdevs_list": [ 00:12:44.465 { 00:12:44.465 
"name": "BaseBdev1", 00:12:44.465 "uuid": "dee92582-4ce1-5525-b35d-bda7b81ef0c1", 00:12:44.465 "is_configured": true, 00:12:44.465 "data_offset": 2048, 00:12:44.465 "data_size": 63488 00:12:44.465 }, 00:12:44.466 { 00:12:44.466 "name": "BaseBdev2", 00:12:44.466 "uuid": "e61b3ad8-a9f3-558c-8741-9789188a864d", 00:12:44.466 "is_configured": true, 00:12:44.466 "data_offset": 2048, 00:12:44.466 "data_size": 63488 00:12:44.466 }, 00:12:44.466 { 00:12:44.466 "name": "BaseBdev3", 00:12:44.466 "uuid": "38b67e17-ffd1-5c06-adb6-23aa674a05ff", 00:12:44.466 "is_configured": true, 00:12:44.466 "data_offset": 2048, 00:12:44.466 "data_size": 63488 00:12:44.466 } 00:12:44.466 ] 00:12:44.466 }' 00:12:44.466 20:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.466 20:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.033 20:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:45.033 20:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.033 [2024-11-26 20:25:38.469221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.971 "name": "raid_bdev1", 00:12:45.971 "uuid": "fb7b0246-96f1-4aee-84ce-332a12c25fbb", 00:12:45.971 "strip_size_kb": 64, 00:12:45.971 "state": "online", 
00:12:45.971 "raid_level": "concat", 00:12:45.971 "superblock": true, 00:12:45.971 "num_base_bdevs": 3, 00:12:45.971 "num_base_bdevs_discovered": 3, 00:12:45.971 "num_base_bdevs_operational": 3, 00:12:45.971 "base_bdevs_list": [ 00:12:45.971 { 00:12:45.971 "name": "BaseBdev1", 00:12:45.971 "uuid": "dee92582-4ce1-5525-b35d-bda7b81ef0c1", 00:12:45.971 "is_configured": true, 00:12:45.971 "data_offset": 2048, 00:12:45.971 "data_size": 63488 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "name": "BaseBdev2", 00:12:45.971 "uuid": "e61b3ad8-a9f3-558c-8741-9789188a864d", 00:12:45.971 "is_configured": true, 00:12:45.971 "data_offset": 2048, 00:12:45.971 "data_size": 63488 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "name": "BaseBdev3", 00:12:45.971 "uuid": "38b67e17-ffd1-5c06-adb6-23aa674a05ff", 00:12:45.971 "is_configured": true, 00:12:45.971 "data_offset": 2048, 00:12:45.971 "data_size": 63488 00:12:45.971 } 00:12:45.971 ] 00:12:45.971 }' 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.971 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.539 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.539 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.539 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.539 [2024-11-26 20:25:39.813518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.539 [2024-11-26 20:25:39.813555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.539 [2024-11-26 20:25:39.816742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.539 [2024-11-26 20:25:39.816903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.539 [2024-11-26 20:25:39.816962] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.539 [2024-11-26 20:25:39.816978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:46.539 { 00:12:46.539 "results": [ 00:12:46.539 { 00:12:46.539 "job": "raid_bdev1", 00:12:46.539 "core_mask": "0x1", 00:12:46.540 "workload": "randrw", 00:12:46.540 "percentage": 50, 00:12:46.540 "status": "finished", 00:12:46.540 "queue_depth": 1, 00:12:46.540 "io_size": 131072, 00:12:46.540 "runtime": 1.34485, 00:12:46.540 "iops": 14350.299289883631, 00:12:46.540 "mibps": 1793.7874112354539, 00:12:46.540 "io_failed": 1, 00:12:46.540 "io_timeout": 0, 00:12:46.540 "avg_latency_us": 96.59045889992534, 00:12:46.540 "min_latency_us": 27.165065502183406, 00:12:46.540 "max_latency_us": 1702.7912663755458 00:12:46.540 } 00:12:46.540 ], 00:12:46.540 "core_count": 1 00:12:46.540 } 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67560 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67560 ']' 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67560 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67560 00:12:46.540 killing process with pid 67560 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.540 20:25:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67560' 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67560 00:12:46.540 [2024-11-26 20:25:39.854739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.540 20:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67560 00:12:46.799 [2024-11-26 20:25:40.096779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zTXhjyZ3iQ 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:12:48.180 00:12:48.180 real 0m4.686s 00:12:48.180 user 0m5.581s 00:12:48.180 sys 0m0.550s 00:12:48.180 ************************************ 00:12:48.180 END TEST raid_write_error_test 00:12:48.180 ************************************ 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.180 20:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.180 20:25:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:48.180 20:25:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:48.180 20:25:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:48.180 20:25:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.180 20:25:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.180 ************************************ 00:12:48.180 START TEST raid_state_function_test 00:12:48.180 ************************************ 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67704 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67704' 00:12:48.180 Process raid pid: 67704 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67704 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67704 ']' 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.180 20:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.180 [2024-11-26 20:25:41.533767] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:12:48.180 [2024-11-26 20:25:41.534000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.180 [2024-11-26 20:25:41.709556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.441 [2024-11-26 20:25:41.835345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.701 [2024-11-26 20:25:42.055523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.701 [2024-11-26 20:25:42.055671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.960 [2024-11-26 20:25:42.375205] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.960 [2024-11-26 20:25:42.375273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.960 [2024-11-26 20:25:42.375286] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.960 [2024-11-26 20:25:42.375297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.960 [2024-11-26 20:25:42.375305] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.960 [2024-11-26 20:25:42.375314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.960 
20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.960 "name": "Existed_Raid", 00:12:48.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.960 "strip_size_kb": 0, 00:12:48.960 "state": "configuring", 00:12:48.960 "raid_level": "raid1", 00:12:48.960 "superblock": false, 00:12:48.960 "num_base_bdevs": 3, 00:12:48.960 "num_base_bdevs_discovered": 0, 00:12:48.960 "num_base_bdevs_operational": 3, 00:12:48.960 "base_bdevs_list": [ 00:12:48.960 { 00:12:48.960 "name": "BaseBdev1", 00:12:48.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.960 "is_configured": false, 00:12:48.960 "data_offset": 0, 00:12:48.960 "data_size": 0 00:12:48.960 }, 00:12:48.960 { 00:12:48.960 "name": "BaseBdev2", 00:12:48.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.960 "is_configured": false, 00:12:48.960 "data_offset": 0, 00:12:48.960 "data_size": 0 00:12:48.960 }, 00:12:48.960 { 00:12:48.960 "name": "BaseBdev3", 00:12:48.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.960 "is_configured": false, 00:12:48.960 "data_offset": 0, 00:12:48.960 "data_size": 0 00:12:48.960 } 00:12:48.960 ] 00:12:48.960 }' 00:12:48.960 20:25:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.960 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 [2024-11-26 20:25:42.842342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.529 [2024-11-26 20:25:42.842453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 [2024-11-26 20:25:42.854320] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.529 [2024-11-26 20:25:42.854367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.529 [2024-11-26 20:25:42.854377] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.529 [2024-11-26 20:25:42.854388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.529 [2024-11-26 20:25:42.854395] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.529 [2024-11-26 20:25:42.854405] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 [2024-11-26 20:25:42.905092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.529 BaseBdev1 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 [ 00:12:49.529 { 00:12:49.529 "name": "BaseBdev1", 00:12:49.529 "aliases": [ 00:12:49.529 "4796c13f-ad72-4354-a4ae-f9993a69d0c1" 00:12:49.529 ], 00:12:49.529 "product_name": "Malloc disk", 00:12:49.529 "block_size": 512, 00:12:49.529 "num_blocks": 65536, 00:12:49.529 "uuid": "4796c13f-ad72-4354-a4ae-f9993a69d0c1", 00:12:49.529 "assigned_rate_limits": { 00:12:49.529 "rw_ios_per_sec": 0, 00:12:49.529 "rw_mbytes_per_sec": 0, 00:12:49.529 "r_mbytes_per_sec": 0, 00:12:49.529 "w_mbytes_per_sec": 0 00:12:49.529 }, 00:12:49.529 "claimed": true, 00:12:49.529 "claim_type": "exclusive_write", 00:12:49.529 "zoned": false, 00:12:49.529 "supported_io_types": { 00:12:49.529 "read": true, 00:12:49.529 "write": true, 00:12:49.529 "unmap": true, 00:12:49.529 "flush": true, 00:12:49.529 "reset": true, 00:12:49.529 "nvme_admin": false, 00:12:49.529 "nvme_io": false, 00:12:49.529 "nvme_io_md": false, 00:12:49.529 "write_zeroes": true, 00:12:49.529 "zcopy": true, 00:12:49.529 "get_zone_info": false, 00:12:49.529 "zone_management": false, 00:12:49.529 "zone_append": false, 00:12:49.529 "compare": false, 00:12:49.529 "compare_and_write": false, 00:12:49.529 "abort": true, 00:12:49.529 "seek_hole": false, 00:12:49.529 "seek_data": false, 00:12:49.529 "copy": true, 00:12:49.529 "nvme_iov_md": false 00:12:49.529 }, 00:12:49.529 "memory_domains": [ 00:12:49.529 { 00:12:49.529 "dma_device_id": "system", 00:12:49.529 "dma_device_type": 1 00:12:49.529 }, 00:12:49.529 { 00:12:49.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.529 "dma_device_type": 2 00:12:49.529 } 00:12:49.529 ], 00:12:49.529 "driver_specific": {} 00:12:49.529 } 00:12:49.529 ] 00:12:49.529 20:25:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:49.529 "name": "Existed_Raid", 00:12:49.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.529 "strip_size_kb": 0, 00:12:49.529 "state": "configuring", 00:12:49.529 "raid_level": "raid1", 00:12:49.529 "superblock": false, 00:12:49.529 "num_base_bdevs": 3, 00:12:49.529 "num_base_bdevs_discovered": 1, 00:12:49.529 "num_base_bdevs_operational": 3, 00:12:49.529 "base_bdevs_list": [ 00:12:49.529 { 00:12:49.529 "name": "BaseBdev1", 00:12:49.529 "uuid": "4796c13f-ad72-4354-a4ae-f9993a69d0c1", 00:12:49.529 "is_configured": true, 00:12:49.529 "data_offset": 0, 00:12:49.529 "data_size": 65536 00:12:49.529 }, 00:12:49.529 { 00:12:49.529 "name": "BaseBdev2", 00:12:49.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.529 "is_configured": false, 00:12:49.529 "data_offset": 0, 00:12:49.529 "data_size": 0 00:12:49.529 }, 00:12:49.529 { 00:12:49.529 "name": "BaseBdev3", 00:12:49.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.529 "is_configured": false, 00:12:49.529 "data_offset": 0, 00:12:49.529 "data_size": 0 00:12:49.529 } 00:12:49.529 ] 00:12:49.529 }' 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.529 20:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.099 [2024-11-26 20:25:43.420393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:50.099 [2024-11-26 20:25:43.420513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.099 [2024-11-26 20:25:43.432458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.099 [2024-11-26 20:25:43.434628] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.099 [2024-11-26 20:25:43.434725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.099 [2024-11-26 20:25:43.434762] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.099 [2024-11-26 20:25:43.434791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.099 "name": "Existed_Raid", 00:12:50.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.099 "strip_size_kb": 0, 00:12:50.099 "state": "configuring", 00:12:50.099 "raid_level": "raid1", 00:12:50.099 "superblock": false, 00:12:50.099 "num_base_bdevs": 3, 00:12:50.099 "num_base_bdevs_discovered": 1, 00:12:50.099 "num_base_bdevs_operational": 3, 00:12:50.099 "base_bdevs_list": [ 00:12:50.099 { 00:12:50.099 "name": "BaseBdev1", 00:12:50.099 "uuid": "4796c13f-ad72-4354-a4ae-f9993a69d0c1", 00:12:50.099 "is_configured": true, 00:12:50.099 "data_offset": 0, 00:12:50.099 "data_size": 65536 00:12:50.099 }, 00:12:50.099 { 00:12:50.099 "name": "BaseBdev2", 00:12:50.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.099 
"is_configured": false, 00:12:50.099 "data_offset": 0, 00:12:50.099 "data_size": 0 00:12:50.099 }, 00:12:50.099 { 00:12:50.099 "name": "BaseBdev3", 00:12:50.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.099 "is_configured": false, 00:12:50.099 "data_offset": 0, 00:12:50.099 "data_size": 0 00:12:50.099 } 00:12:50.099 ] 00:12:50.099 }' 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.099 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.359 [2024-11-26 20:25:43.873934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.359 BaseBdev2 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.359 20:25:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.359 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.359 [ 00:12:50.359 { 00:12:50.359 "name": "BaseBdev2", 00:12:50.359 "aliases": [ 00:12:50.359 "63472350-ca96-420f-8734-cc05f1769785" 00:12:50.359 ], 00:12:50.359 "product_name": "Malloc disk", 00:12:50.359 "block_size": 512, 00:12:50.359 "num_blocks": 65536, 00:12:50.359 "uuid": "63472350-ca96-420f-8734-cc05f1769785", 00:12:50.359 "assigned_rate_limits": { 00:12:50.359 "rw_ios_per_sec": 0, 00:12:50.359 "rw_mbytes_per_sec": 0, 00:12:50.359 "r_mbytes_per_sec": 0, 00:12:50.359 "w_mbytes_per_sec": 0 00:12:50.359 }, 00:12:50.359 "claimed": true, 00:12:50.359 "claim_type": "exclusive_write", 00:12:50.359 "zoned": false, 00:12:50.359 "supported_io_types": { 00:12:50.359 "read": true, 00:12:50.359 "write": true, 00:12:50.359 "unmap": true, 00:12:50.359 "flush": true, 00:12:50.359 "reset": true, 00:12:50.359 "nvme_admin": false, 00:12:50.359 "nvme_io": false, 00:12:50.359 "nvme_io_md": false, 00:12:50.359 "write_zeroes": true, 00:12:50.359 "zcopy": true, 00:12:50.359 "get_zone_info": false, 00:12:50.359 "zone_management": false, 00:12:50.359 "zone_append": false, 00:12:50.359 "compare": false, 00:12:50.359 "compare_and_write": false, 00:12:50.359 "abort": true, 00:12:50.359 "seek_hole": false, 00:12:50.359 "seek_data": false, 00:12:50.359 "copy": true, 00:12:50.359 "nvme_iov_md": false 00:12:50.359 }, 00:12:50.359 
"memory_domains": [ 00:12:50.359 { 00:12:50.359 "dma_device_id": "system", 00:12:50.359 "dma_device_type": 1 00:12:50.359 }, 00:12:50.359 { 00:12:50.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.359 "dma_device_type": 2 00:12:50.359 } 00:12:50.620 ], 00:12:50.620 "driver_specific": {} 00:12:50.620 } 00:12:50.620 ] 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.620 "name": "Existed_Raid", 00:12:50.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.620 "strip_size_kb": 0, 00:12:50.620 "state": "configuring", 00:12:50.620 "raid_level": "raid1", 00:12:50.620 "superblock": false, 00:12:50.620 "num_base_bdevs": 3, 00:12:50.620 "num_base_bdevs_discovered": 2, 00:12:50.620 "num_base_bdevs_operational": 3, 00:12:50.620 "base_bdevs_list": [ 00:12:50.620 { 00:12:50.620 "name": "BaseBdev1", 00:12:50.620 "uuid": "4796c13f-ad72-4354-a4ae-f9993a69d0c1", 00:12:50.620 "is_configured": true, 00:12:50.620 "data_offset": 0, 00:12:50.620 "data_size": 65536 00:12:50.620 }, 00:12:50.620 { 00:12:50.620 "name": "BaseBdev2", 00:12:50.620 "uuid": "63472350-ca96-420f-8734-cc05f1769785", 00:12:50.620 "is_configured": true, 00:12:50.620 "data_offset": 0, 00:12:50.620 "data_size": 65536 00:12:50.620 }, 00:12:50.620 { 00:12:50.620 "name": "BaseBdev3", 00:12:50.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.620 "is_configured": false, 00:12:50.620 "data_offset": 0, 00:12:50.620 "data_size": 0 00:12:50.620 } 00:12:50.620 ] 00:12:50.620 }' 00:12:50.620 20:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.621 20:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.882 [2024-11-26 20:25:44.373714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.882 [2024-11-26 20:25:44.373854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:50.882 [2024-11-26 20:25:44.373889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:50.882 [2024-11-26 20:25:44.374222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:50.882 [2024-11-26 20:25:44.374502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:50.882 [2024-11-26 20:25:44.374551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:50.882 [2024-11-26 20:25:44.374921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.882 BaseBdev3 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.882 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.882 [ 00:12:50.882 { 00:12:50.882 "name": "BaseBdev3", 00:12:50.882 "aliases": [ 00:12:50.882 "7d9112b9-b1c0-4871-8255-25e018f7abe4" 00:12:50.882 ], 00:12:50.882 "product_name": "Malloc disk", 00:12:50.882 "block_size": 512, 00:12:50.882 "num_blocks": 65536, 00:12:50.882 "uuid": "7d9112b9-b1c0-4871-8255-25e018f7abe4", 00:12:50.882 "assigned_rate_limits": { 00:12:50.882 "rw_ios_per_sec": 0, 00:12:50.882 "rw_mbytes_per_sec": 0, 00:12:50.882 "r_mbytes_per_sec": 0, 00:12:50.882 "w_mbytes_per_sec": 0 00:12:50.882 }, 00:12:50.882 "claimed": true, 00:12:50.882 "claim_type": "exclusive_write", 00:12:50.882 "zoned": false, 00:12:50.882 "supported_io_types": { 00:12:50.882 "read": true, 00:12:50.882 "write": true, 00:12:50.882 "unmap": true, 00:12:50.882 "flush": true, 00:12:50.882 "reset": true, 00:12:50.882 "nvme_admin": false, 00:12:50.882 "nvme_io": false, 00:12:50.882 "nvme_io_md": false, 00:12:50.882 "write_zeroes": true, 00:12:50.882 "zcopy": true, 00:12:50.882 "get_zone_info": false, 00:12:50.882 "zone_management": false, 00:12:50.882 "zone_append": false, 00:12:50.883 "compare": false, 00:12:50.883 "compare_and_write": false, 00:12:50.883 "abort": true, 00:12:50.883 "seek_hole": false, 00:12:50.883 "seek_data": false, 00:12:50.883 
"copy": true, 00:12:50.883 "nvme_iov_md": false 00:12:50.883 }, 00:12:50.883 "memory_domains": [ 00:12:50.883 { 00:12:50.883 "dma_device_id": "system", 00:12:50.883 "dma_device_type": 1 00:12:50.883 }, 00:12:50.883 { 00:12:50.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.883 "dma_device_type": 2 00:12:50.883 } 00:12:50.883 ], 00:12:50.883 "driver_specific": {} 00:12:50.883 } 00:12:50.883 ] 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.883 20:25:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.883 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.143 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.143 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.143 "name": "Existed_Raid", 00:12:51.143 "uuid": "70bf02cc-5f25-4b6c-9d7c-9e16db861fc5", 00:12:51.143 "strip_size_kb": 0, 00:12:51.143 "state": "online", 00:12:51.143 "raid_level": "raid1", 00:12:51.143 "superblock": false, 00:12:51.143 "num_base_bdevs": 3, 00:12:51.143 "num_base_bdevs_discovered": 3, 00:12:51.143 "num_base_bdevs_operational": 3, 00:12:51.143 "base_bdevs_list": [ 00:12:51.143 { 00:12:51.143 "name": "BaseBdev1", 00:12:51.143 "uuid": "4796c13f-ad72-4354-a4ae-f9993a69d0c1", 00:12:51.143 "is_configured": true, 00:12:51.143 "data_offset": 0, 00:12:51.143 "data_size": 65536 00:12:51.143 }, 00:12:51.143 { 00:12:51.143 "name": "BaseBdev2", 00:12:51.143 "uuid": "63472350-ca96-420f-8734-cc05f1769785", 00:12:51.143 "is_configured": true, 00:12:51.143 "data_offset": 0, 00:12:51.143 "data_size": 65536 00:12:51.143 }, 00:12:51.143 { 00:12:51.143 "name": "BaseBdev3", 00:12:51.143 "uuid": "7d9112b9-b1c0-4871-8255-25e018f7abe4", 00:12:51.143 "is_configured": true, 00:12:51.143 "data_offset": 0, 00:12:51.143 "data_size": 65536 00:12:51.143 } 00:12:51.143 ] 00:12:51.143 }' 00:12:51.143 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.143 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 20:25:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 [2024-11-26 20:25:44.865322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.403 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.403 "name": "Existed_Raid", 00:12:51.403 "aliases": [ 00:12:51.403 "70bf02cc-5f25-4b6c-9d7c-9e16db861fc5" 00:12:51.403 ], 00:12:51.403 "product_name": "Raid Volume", 00:12:51.403 "block_size": 512, 00:12:51.403 "num_blocks": 65536, 00:12:51.403 "uuid": "70bf02cc-5f25-4b6c-9d7c-9e16db861fc5", 00:12:51.403 "assigned_rate_limits": { 00:12:51.403 "rw_ios_per_sec": 0, 00:12:51.403 "rw_mbytes_per_sec": 0, 00:12:51.403 "r_mbytes_per_sec": 0, 00:12:51.403 "w_mbytes_per_sec": 0 00:12:51.403 }, 00:12:51.403 "claimed": false, 00:12:51.403 "zoned": false, 
00:12:51.403 "supported_io_types": { 00:12:51.404 "read": true, 00:12:51.404 "write": true, 00:12:51.404 "unmap": false, 00:12:51.404 "flush": false, 00:12:51.404 "reset": true, 00:12:51.404 "nvme_admin": false, 00:12:51.404 "nvme_io": false, 00:12:51.404 "nvme_io_md": false, 00:12:51.404 "write_zeroes": true, 00:12:51.404 "zcopy": false, 00:12:51.404 "get_zone_info": false, 00:12:51.404 "zone_management": false, 00:12:51.404 "zone_append": false, 00:12:51.404 "compare": false, 00:12:51.404 "compare_and_write": false, 00:12:51.404 "abort": false, 00:12:51.404 "seek_hole": false, 00:12:51.404 "seek_data": false, 00:12:51.404 "copy": false, 00:12:51.404 "nvme_iov_md": false 00:12:51.404 }, 00:12:51.404 "memory_domains": [ 00:12:51.404 { 00:12:51.404 "dma_device_id": "system", 00:12:51.404 "dma_device_type": 1 00:12:51.404 }, 00:12:51.404 { 00:12:51.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.404 "dma_device_type": 2 00:12:51.404 }, 00:12:51.404 { 00:12:51.404 "dma_device_id": "system", 00:12:51.404 "dma_device_type": 1 00:12:51.404 }, 00:12:51.404 { 00:12:51.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.404 "dma_device_type": 2 00:12:51.404 }, 00:12:51.404 { 00:12:51.404 "dma_device_id": "system", 00:12:51.404 "dma_device_type": 1 00:12:51.404 }, 00:12:51.404 { 00:12:51.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.404 "dma_device_type": 2 00:12:51.404 } 00:12:51.404 ], 00:12:51.404 "driver_specific": { 00:12:51.404 "raid": { 00:12:51.404 "uuid": "70bf02cc-5f25-4b6c-9d7c-9e16db861fc5", 00:12:51.404 "strip_size_kb": 0, 00:12:51.404 "state": "online", 00:12:51.404 "raid_level": "raid1", 00:12:51.404 "superblock": false, 00:12:51.404 "num_base_bdevs": 3, 00:12:51.404 "num_base_bdevs_discovered": 3, 00:12:51.404 "num_base_bdevs_operational": 3, 00:12:51.404 "base_bdevs_list": [ 00:12:51.404 { 00:12:51.404 "name": "BaseBdev1", 00:12:51.404 "uuid": "4796c13f-ad72-4354-a4ae-f9993a69d0c1", 00:12:51.404 "is_configured": true, 00:12:51.404 
"data_offset": 0, 00:12:51.404 "data_size": 65536 00:12:51.404 }, 00:12:51.404 { 00:12:51.404 "name": "BaseBdev2", 00:12:51.404 "uuid": "63472350-ca96-420f-8734-cc05f1769785", 00:12:51.404 "is_configured": true, 00:12:51.404 "data_offset": 0, 00:12:51.404 "data_size": 65536 00:12:51.404 }, 00:12:51.404 { 00:12:51.404 "name": "BaseBdev3", 00:12:51.404 "uuid": "7d9112b9-b1c0-4871-8255-25e018f7abe4", 00:12:51.404 "is_configured": true, 00:12:51.404 "data_offset": 0, 00:12:51.404 "data_size": 65536 00:12:51.404 } 00:12:51.404 ] 00:12:51.404 } 00:12:51.404 } 00:12:51.404 }' 00:12:51.404 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.689 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:51.689 BaseBdev2 00:12:51.689 BaseBdev3' 00:12:51.689 20:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.689 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.689 [2024-11-26 20:25:45.164617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.947 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.947 "name": "Existed_Raid", 00:12:51.947 "uuid": "70bf02cc-5f25-4b6c-9d7c-9e16db861fc5", 00:12:51.947 "strip_size_kb": 0, 00:12:51.947 "state": "online", 00:12:51.947 "raid_level": "raid1", 00:12:51.947 "superblock": false, 00:12:51.947 "num_base_bdevs": 3, 00:12:51.947 "num_base_bdevs_discovered": 2, 00:12:51.947 "num_base_bdevs_operational": 2, 00:12:51.947 "base_bdevs_list": [ 00:12:51.947 { 00:12:51.947 "name": null, 00:12:51.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.948 "is_configured": false, 00:12:51.948 "data_offset": 0, 00:12:51.948 "data_size": 65536 00:12:51.948 }, 00:12:51.948 { 00:12:51.948 "name": "BaseBdev2", 00:12:51.948 "uuid": "63472350-ca96-420f-8734-cc05f1769785", 00:12:51.948 "is_configured": true, 00:12:51.948 "data_offset": 0, 00:12:51.948 "data_size": 65536 00:12:51.948 }, 00:12:51.948 { 00:12:51.948 "name": "BaseBdev3", 00:12:51.948 "uuid": "7d9112b9-b1c0-4871-8255-25e018f7abe4", 00:12:51.948 "is_configured": true, 00:12:51.948 "data_offset": 0, 00:12:51.948 "data_size": 65536 00:12:51.948 } 00:12:51.948 ] 
00:12:51.948 }' 00:12:51.948 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.948 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.208 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.208 [2024-11-26 20:25:45.761078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.467 20:25:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.467 20:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.467 [2024-11-26 20:25:45.929018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:52.467 [2024-11-26 20:25:45.929188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.727 [2024-11-26 20:25:46.037615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.727 [2024-11-26 20:25:46.037768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.727 [2024-11-26 20:25:46.037832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:52.727 20:25:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.727 BaseBdev2 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.727 
20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.727 [ 00:12:52.727 { 00:12:52.727 "name": "BaseBdev2", 00:12:52.727 "aliases": [ 00:12:52.727 "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d" 00:12:52.727 ], 00:12:52.727 "product_name": "Malloc disk", 00:12:52.727 "block_size": 512, 00:12:52.727 "num_blocks": 65536, 00:12:52.727 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:52.727 "assigned_rate_limits": { 00:12:52.727 "rw_ios_per_sec": 0, 00:12:52.727 "rw_mbytes_per_sec": 0, 00:12:52.727 "r_mbytes_per_sec": 0, 00:12:52.727 "w_mbytes_per_sec": 0 00:12:52.727 }, 00:12:52.727 "claimed": false, 00:12:52.727 "zoned": false, 00:12:52.727 "supported_io_types": { 00:12:52.727 "read": true, 00:12:52.727 "write": true, 00:12:52.727 "unmap": true, 00:12:52.727 "flush": true, 00:12:52.727 "reset": true, 00:12:52.727 "nvme_admin": false, 00:12:52.727 "nvme_io": false, 00:12:52.727 "nvme_io_md": false, 00:12:52.727 "write_zeroes": true, 
00:12:52.727 "zcopy": true, 00:12:52.727 "get_zone_info": false, 00:12:52.727 "zone_management": false, 00:12:52.727 "zone_append": false, 00:12:52.727 "compare": false, 00:12:52.727 "compare_and_write": false, 00:12:52.727 "abort": true, 00:12:52.727 "seek_hole": false, 00:12:52.727 "seek_data": false, 00:12:52.727 "copy": true, 00:12:52.727 "nvme_iov_md": false 00:12:52.727 }, 00:12:52.727 "memory_domains": [ 00:12:52.727 { 00:12:52.727 "dma_device_id": "system", 00:12:52.727 "dma_device_type": 1 00:12:52.727 }, 00:12:52.727 { 00:12:52.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.727 "dma_device_type": 2 00:12:52.727 } 00:12:52.727 ], 00:12:52.727 "driver_specific": {} 00:12:52.727 } 00:12:52.727 ] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.727 BaseBdev3 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:52.727 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.728 20:25:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.728 [ 00:12:52.728 { 00:12:52.728 "name": "BaseBdev3", 00:12:52.728 "aliases": [ 00:12:52.728 "91a17903-7c8d-4388-a0d1-e593b41aa208" 00:12:52.728 ], 00:12:52.728 "product_name": "Malloc disk", 00:12:52.728 "block_size": 512, 00:12:52.728 "num_blocks": 65536, 00:12:52.728 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:52.728 "assigned_rate_limits": { 00:12:52.728 "rw_ios_per_sec": 0, 00:12:52.728 "rw_mbytes_per_sec": 0, 00:12:52.728 "r_mbytes_per_sec": 0, 00:12:52.728 "w_mbytes_per_sec": 0 00:12:52.728 }, 00:12:52.728 "claimed": false, 00:12:52.728 "zoned": false, 00:12:52.728 "supported_io_types": { 00:12:52.728 "read": true, 00:12:52.728 "write": true, 00:12:52.728 "unmap": true, 00:12:52.728 "flush": true, 00:12:52.728 "reset": true, 00:12:52.728 "nvme_admin": false, 00:12:52.728 "nvme_io": false, 00:12:52.728 "nvme_io_md": false, 00:12:52.728 "write_zeroes": true, 
00:12:52.728 "zcopy": true, 00:12:52.728 "get_zone_info": false, 00:12:52.728 "zone_management": false, 00:12:52.728 "zone_append": false, 00:12:52.728 "compare": false, 00:12:52.728 "compare_and_write": false, 00:12:52.728 "abort": true, 00:12:52.728 "seek_hole": false, 00:12:52.728 "seek_data": false, 00:12:52.728 "copy": true, 00:12:52.728 "nvme_iov_md": false 00:12:52.728 }, 00:12:52.728 "memory_domains": [ 00:12:52.728 { 00:12:52.728 "dma_device_id": "system", 00:12:52.728 "dma_device_type": 1 00:12:52.728 }, 00:12:52.728 { 00:12:52.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.728 "dma_device_type": 2 00:12:52.728 } 00:12:52.728 ], 00:12:52.728 "driver_specific": {} 00:12:52.728 } 00:12:52.728 ] 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.728 [2024-11-26 20:25:46.271775] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.728 [2024-11-26 20:25:46.271909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.728 [2024-11-26 20:25:46.271964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.728 [2024-11-26 20:25:46.274118] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.728 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:52.987 "name": "Existed_Raid", 00:12:52.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.987 "strip_size_kb": 0, 00:12:52.987 "state": "configuring", 00:12:52.987 "raid_level": "raid1", 00:12:52.987 "superblock": false, 00:12:52.987 "num_base_bdevs": 3, 00:12:52.987 "num_base_bdevs_discovered": 2, 00:12:52.987 "num_base_bdevs_operational": 3, 00:12:52.987 "base_bdevs_list": [ 00:12:52.987 { 00:12:52.987 "name": "BaseBdev1", 00:12:52.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.987 "is_configured": false, 00:12:52.987 "data_offset": 0, 00:12:52.987 "data_size": 0 00:12:52.987 }, 00:12:52.987 { 00:12:52.987 "name": "BaseBdev2", 00:12:52.987 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:52.987 "is_configured": true, 00:12:52.987 "data_offset": 0, 00:12:52.987 "data_size": 65536 00:12:52.987 }, 00:12:52.987 { 00:12:52.987 "name": "BaseBdev3", 00:12:52.987 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:52.987 "is_configured": true, 00:12:52.987 "data_offset": 0, 00:12:52.987 "data_size": 65536 00:12:52.987 } 00:12:52.987 ] 00:12:52.987 }' 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.987 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.272 [2024-11-26 20:25:46.739010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.272 "name": "Existed_Raid", 00:12:53.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.272 "strip_size_kb": 0, 00:12:53.272 "state": "configuring", 00:12:53.272 "raid_level": "raid1", 00:12:53.272 "superblock": false, 00:12:53.272 "num_base_bdevs": 3, 
00:12:53.272 "num_base_bdevs_discovered": 1, 00:12:53.272 "num_base_bdevs_operational": 3, 00:12:53.272 "base_bdevs_list": [ 00:12:53.272 { 00:12:53.272 "name": "BaseBdev1", 00:12:53.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.272 "is_configured": false, 00:12:53.272 "data_offset": 0, 00:12:53.272 "data_size": 0 00:12:53.272 }, 00:12:53.272 { 00:12:53.272 "name": null, 00:12:53.272 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:53.272 "is_configured": false, 00:12:53.272 "data_offset": 0, 00:12:53.272 "data_size": 65536 00:12:53.272 }, 00:12:53.272 { 00:12:53.272 "name": "BaseBdev3", 00:12:53.272 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:53.272 "is_configured": true, 00:12:53.272 "data_offset": 0, 00:12:53.272 "data_size": 65536 00:12:53.272 } 00:12:53.272 ] 00:12:53.272 }' 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.272 20:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.838 20:25:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.838 [2024-11-26 20:25:47.270934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.838 BaseBdev1 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.838 [ 00:12:53.838 { 00:12:53.838 "name": "BaseBdev1", 00:12:53.838 "aliases": [ 00:12:53.838 "eb13d284-7b05-4331-84cd-42cf8fc0587c" 00:12:53.838 ], 00:12:53.838 "product_name": "Malloc disk", 
00:12:53.838 "block_size": 512, 00:12:53.838 "num_blocks": 65536, 00:12:53.838 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:53.838 "assigned_rate_limits": { 00:12:53.838 "rw_ios_per_sec": 0, 00:12:53.838 "rw_mbytes_per_sec": 0, 00:12:53.838 "r_mbytes_per_sec": 0, 00:12:53.838 "w_mbytes_per_sec": 0 00:12:53.838 }, 00:12:53.838 "claimed": true, 00:12:53.838 "claim_type": "exclusive_write", 00:12:53.838 "zoned": false, 00:12:53.838 "supported_io_types": { 00:12:53.838 "read": true, 00:12:53.838 "write": true, 00:12:53.838 "unmap": true, 00:12:53.838 "flush": true, 00:12:53.838 "reset": true, 00:12:53.838 "nvme_admin": false, 00:12:53.838 "nvme_io": false, 00:12:53.838 "nvme_io_md": false, 00:12:53.838 "write_zeroes": true, 00:12:53.838 "zcopy": true, 00:12:53.838 "get_zone_info": false, 00:12:53.838 "zone_management": false, 00:12:53.838 "zone_append": false, 00:12:53.838 "compare": false, 00:12:53.838 "compare_and_write": false, 00:12:53.838 "abort": true, 00:12:53.838 "seek_hole": false, 00:12:53.838 "seek_data": false, 00:12:53.838 "copy": true, 00:12:53.838 "nvme_iov_md": false 00:12:53.838 }, 00:12:53.838 "memory_domains": [ 00:12:53.838 { 00:12:53.838 "dma_device_id": "system", 00:12:53.838 "dma_device_type": 1 00:12:53.838 }, 00:12:53.838 { 00:12:53.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.838 "dma_device_type": 2 00:12:53.838 } 00:12:53.838 ], 00:12:53.838 "driver_specific": {} 00:12:53.838 } 00:12:53.838 ] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.838 "name": "Existed_Raid", 00:12:53.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.838 "strip_size_kb": 0, 00:12:53.838 "state": "configuring", 00:12:53.838 "raid_level": "raid1", 00:12:53.838 "superblock": false, 00:12:53.838 "num_base_bdevs": 3, 00:12:53.838 "num_base_bdevs_discovered": 2, 00:12:53.838 "num_base_bdevs_operational": 3, 00:12:53.838 "base_bdevs_list": [ 00:12:53.838 { 00:12:53.838 "name": "BaseBdev1", 00:12:53.838 "uuid": 
"eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:53.838 "is_configured": true, 00:12:53.838 "data_offset": 0, 00:12:53.838 "data_size": 65536 00:12:53.838 }, 00:12:53.838 { 00:12:53.838 "name": null, 00:12:53.838 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:53.838 "is_configured": false, 00:12:53.838 "data_offset": 0, 00:12:53.838 "data_size": 65536 00:12:53.838 }, 00:12:53.838 { 00:12:53.838 "name": "BaseBdev3", 00:12:53.838 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:53.838 "is_configured": true, 00:12:53.838 "data_offset": 0, 00:12:53.838 "data_size": 65536 00:12:53.838 } 00:12:53.838 ] 00:12:53.838 }' 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.838 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.405 [2024-11-26 20:25:47.790122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:54.405 20:25:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.405 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.405 "name": "Existed_Raid", 00:12:54.405 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:54.405 "strip_size_kb": 0, 00:12:54.405 "state": "configuring", 00:12:54.405 "raid_level": "raid1", 00:12:54.406 "superblock": false, 00:12:54.406 "num_base_bdevs": 3, 00:12:54.406 "num_base_bdevs_discovered": 1, 00:12:54.406 "num_base_bdevs_operational": 3, 00:12:54.406 "base_bdevs_list": [ 00:12:54.406 { 00:12:54.406 "name": "BaseBdev1", 00:12:54.406 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:54.406 "is_configured": true, 00:12:54.406 "data_offset": 0, 00:12:54.406 "data_size": 65536 00:12:54.406 }, 00:12:54.406 { 00:12:54.406 "name": null, 00:12:54.406 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:54.406 "is_configured": false, 00:12:54.406 "data_offset": 0, 00:12:54.406 "data_size": 65536 00:12:54.406 }, 00:12:54.406 { 00:12:54.406 "name": null, 00:12:54.406 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:54.406 "is_configured": false, 00:12:54.406 "data_offset": 0, 00:12:54.406 "data_size": 65536 00:12:54.406 } 00:12:54.406 ] 00:12:54.406 }' 00:12:54.406 20:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.406 20:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.972 [2024-11-26 20:25:48.313323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.972 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.972 "name": "Existed_Raid", 00:12:54.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.972 "strip_size_kb": 0, 00:12:54.972 "state": "configuring", 00:12:54.973 "raid_level": "raid1", 00:12:54.973 "superblock": false, 00:12:54.973 "num_base_bdevs": 3, 00:12:54.973 "num_base_bdevs_discovered": 2, 00:12:54.973 "num_base_bdevs_operational": 3, 00:12:54.973 "base_bdevs_list": [ 00:12:54.973 { 00:12:54.973 "name": "BaseBdev1", 00:12:54.973 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:54.973 "is_configured": true, 00:12:54.973 "data_offset": 0, 00:12:54.973 "data_size": 65536 00:12:54.973 }, 00:12:54.973 { 00:12:54.973 "name": null, 00:12:54.973 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:54.973 "is_configured": false, 00:12:54.973 "data_offset": 0, 00:12:54.973 "data_size": 65536 00:12:54.973 }, 00:12:54.973 { 00:12:54.973 "name": "BaseBdev3", 00:12:54.973 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:54.973 "is_configured": true, 00:12:54.973 "data_offset": 0, 00:12:54.973 "data_size": 65536 00:12:54.973 } 00:12:54.973 ] 00:12:54.973 }' 00:12:54.973 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.973 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.231 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:55.231 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.231 20:25:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.231 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.490 [2024-11-26 20:25:48.804485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.490 20:25:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.490 "name": "Existed_Raid", 00:12:55.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.490 "strip_size_kb": 0, 00:12:55.490 "state": "configuring", 00:12:55.490 "raid_level": "raid1", 00:12:55.490 "superblock": false, 00:12:55.490 "num_base_bdevs": 3, 00:12:55.490 "num_base_bdevs_discovered": 1, 00:12:55.490 "num_base_bdevs_operational": 3, 00:12:55.490 "base_bdevs_list": [ 00:12:55.490 { 00:12:55.490 "name": null, 00:12:55.490 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:55.490 "is_configured": false, 00:12:55.490 "data_offset": 0, 00:12:55.490 "data_size": 65536 00:12:55.490 }, 00:12:55.490 { 00:12:55.490 "name": null, 00:12:55.490 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:55.490 "is_configured": false, 00:12:55.490 "data_offset": 0, 00:12:55.490 "data_size": 65536 00:12:55.490 }, 00:12:55.490 { 00:12:55.490 "name": "BaseBdev3", 00:12:55.490 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:55.490 "is_configured": true, 00:12:55.490 "data_offset": 0, 00:12:55.490 "data_size": 65536 00:12:55.490 } 00:12:55.490 ] 00:12:55.490 }' 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.490 20:25:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.060 [2024-11-26 20:25:49.414392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.060 "name": "Existed_Raid", 00:12:56.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.060 "strip_size_kb": 0, 00:12:56.060 "state": "configuring", 00:12:56.060 "raid_level": "raid1", 00:12:56.060 "superblock": false, 00:12:56.060 "num_base_bdevs": 3, 00:12:56.060 "num_base_bdevs_discovered": 2, 00:12:56.060 "num_base_bdevs_operational": 3, 00:12:56.060 "base_bdevs_list": [ 00:12:56.060 { 00:12:56.060 "name": null, 00:12:56.060 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:56.060 "is_configured": false, 00:12:56.060 "data_offset": 0, 00:12:56.060 "data_size": 65536 00:12:56.060 }, 00:12:56.060 { 00:12:56.060 "name": "BaseBdev2", 00:12:56.060 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:56.060 "is_configured": true, 00:12:56.060 "data_offset": 0, 00:12:56.060 "data_size": 65536 00:12:56.060 }, 00:12:56.060 { 
00:12:56.060 "name": "BaseBdev3", 00:12:56.060 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:56.060 "is_configured": true, 00:12:56.060 "data_offset": 0, 00:12:56.060 "data_size": 65536 00:12:56.060 } 00:12:56.060 ] 00:12:56.060 }' 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.060 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.318 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.318 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.318 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.318 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eb13d284-7b05-4331-84cd-42cf8fc0587c 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.578 20:25:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.578 [2024-11-26 20:25:49.972042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:56.578 [2024-11-26 20:25:49.972094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:56.578 [2024-11-26 20:25:49.972103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:56.578 [2024-11-26 20:25:49.972369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:56.578 [2024-11-26 20:25:49.972542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:56.578 [2024-11-26 20:25:49.972554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:56.578 [2024-11-26 20:25:49.972840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.578 NewBaseBdev 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.578 20:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.578 [ 00:12:56.578 { 00:12:56.578 "name": "NewBaseBdev", 00:12:56.578 "aliases": [ 00:12:56.578 "eb13d284-7b05-4331-84cd-42cf8fc0587c" 00:12:56.578 ], 00:12:56.578 "product_name": "Malloc disk", 00:12:56.578 "block_size": 512, 00:12:56.578 "num_blocks": 65536, 00:12:56.578 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:56.578 "assigned_rate_limits": { 00:12:56.578 "rw_ios_per_sec": 0, 00:12:56.578 "rw_mbytes_per_sec": 0, 00:12:56.578 "r_mbytes_per_sec": 0, 00:12:56.578 "w_mbytes_per_sec": 0 00:12:56.578 }, 00:12:56.578 "claimed": true, 00:12:56.578 "claim_type": "exclusive_write", 00:12:56.578 "zoned": false, 00:12:56.578 "supported_io_types": { 00:12:56.578 "read": true, 00:12:56.578 "write": true, 00:12:56.578 "unmap": true, 00:12:56.578 "flush": true, 00:12:56.578 "reset": true, 00:12:56.578 "nvme_admin": false, 00:12:56.578 "nvme_io": false, 00:12:56.578 "nvme_io_md": false, 00:12:56.578 "write_zeroes": true, 00:12:56.578 "zcopy": true, 00:12:56.578 "get_zone_info": false, 00:12:56.578 "zone_management": false, 00:12:56.578 "zone_append": false, 00:12:56.578 "compare": false, 00:12:56.578 "compare_and_write": false, 00:12:56.578 "abort": true, 00:12:56.578 "seek_hole": false, 00:12:56.578 "seek_data": false, 00:12:56.578 "copy": true, 00:12:56.578 "nvme_iov_md": false 00:12:56.578 }, 00:12:56.578 "memory_domains": [ 00:12:56.578 { 00:12:56.578 
"dma_device_id": "system", 00:12:56.578 "dma_device_type": 1 00:12:56.578 }, 00:12:56.578 { 00:12:56.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.578 "dma_device_type": 2 00:12:56.578 } 00:12:56.578 ], 00:12:56.578 "driver_specific": {} 00:12:56.578 } 00:12:56.578 ] 00:12:56.578 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.578 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:56.578 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:56.578 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.578 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.578 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.578 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.579 "name": "Existed_Raid", 00:12:56.579 "uuid": "8af1eb8e-5ed0-4938-8bc5-3bfdcc9bff4f", 00:12:56.579 "strip_size_kb": 0, 00:12:56.579 "state": "online", 00:12:56.579 "raid_level": "raid1", 00:12:56.579 "superblock": false, 00:12:56.579 "num_base_bdevs": 3, 00:12:56.579 "num_base_bdevs_discovered": 3, 00:12:56.579 "num_base_bdevs_operational": 3, 00:12:56.579 "base_bdevs_list": [ 00:12:56.579 { 00:12:56.579 "name": "NewBaseBdev", 00:12:56.579 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:56.579 "is_configured": true, 00:12:56.579 "data_offset": 0, 00:12:56.579 "data_size": 65536 00:12:56.579 }, 00:12:56.579 { 00:12:56.579 "name": "BaseBdev2", 00:12:56.579 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:56.579 "is_configured": true, 00:12:56.579 "data_offset": 0, 00:12:56.579 "data_size": 65536 00:12:56.579 }, 00:12:56.579 { 00:12:56.579 "name": "BaseBdev3", 00:12:56.579 "uuid": "91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:56.579 "is_configured": true, 00:12:56.579 "data_offset": 0, 00:12:56.579 "data_size": 65536 00:12:56.579 } 00:12:56.579 ] 00:12:56.579 }' 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.579 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.147 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.147 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.147 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.147 20:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.148 [2024-11-26 20:25:50.471578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.148 "name": "Existed_Raid", 00:12:57.148 "aliases": [ 00:12:57.148 "8af1eb8e-5ed0-4938-8bc5-3bfdcc9bff4f" 00:12:57.148 ], 00:12:57.148 "product_name": "Raid Volume", 00:12:57.148 "block_size": 512, 00:12:57.148 "num_blocks": 65536, 00:12:57.148 "uuid": "8af1eb8e-5ed0-4938-8bc5-3bfdcc9bff4f", 00:12:57.148 "assigned_rate_limits": { 00:12:57.148 "rw_ios_per_sec": 0, 00:12:57.148 "rw_mbytes_per_sec": 0, 00:12:57.148 "r_mbytes_per_sec": 0, 00:12:57.148 "w_mbytes_per_sec": 0 00:12:57.148 }, 00:12:57.148 "claimed": false, 00:12:57.148 "zoned": false, 00:12:57.148 "supported_io_types": { 00:12:57.148 "read": true, 00:12:57.148 "write": true, 00:12:57.148 "unmap": false, 00:12:57.148 "flush": false, 00:12:57.148 "reset": true, 00:12:57.148 "nvme_admin": false, 00:12:57.148 "nvme_io": false, 00:12:57.148 "nvme_io_md": false, 00:12:57.148 "write_zeroes": true, 00:12:57.148 "zcopy": false, 00:12:57.148 
"get_zone_info": false, 00:12:57.148 "zone_management": false, 00:12:57.148 "zone_append": false, 00:12:57.148 "compare": false, 00:12:57.148 "compare_and_write": false, 00:12:57.148 "abort": false, 00:12:57.148 "seek_hole": false, 00:12:57.148 "seek_data": false, 00:12:57.148 "copy": false, 00:12:57.148 "nvme_iov_md": false 00:12:57.148 }, 00:12:57.148 "memory_domains": [ 00:12:57.148 { 00:12:57.148 "dma_device_id": "system", 00:12:57.148 "dma_device_type": 1 00:12:57.148 }, 00:12:57.148 { 00:12:57.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.148 "dma_device_type": 2 00:12:57.148 }, 00:12:57.148 { 00:12:57.148 "dma_device_id": "system", 00:12:57.148 "dma_device_type": 1 00:12:57.148 }, 00:12:57.148 { 00:12:57.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.148 "dma_device_type": 2 00:12:57.148 }, 00:12:57.148 { 00:12:57.148 "dma_device_id": "system", 00:12:57.148 "dma_device_type": 1 00:12:57.148 }, 00:12:57.148 { 00:12:57.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.148 "dma_device_type": 2 00:12:57.148 } 00:12:57.148 ], 00:12:57.148 "driver_specific": { 00:12:57.148 "raid": { 00:12:57.148 "uuid": "8af1eb8e-5ed0-4938-8bc5-3bfdcc9bff4f", 00:12:57.148 "strip_size_kb": 0, 00:12:57.148 "state": "online", 00:12:57.148 "raid_level": "raid1", 00:12:57.148 "superblock": false, 00:12:57.148 "num_base_bdevs": 3, 00:12:57.148 "num_base_bdevs_discovered": 3, 00:12:57.148 "num_base_bdevs_operational": 3, 00:12:57.148 "base_bdevs_list": [ 00:12:57.148 { 00:12:57.148 "name": "NewBaseBdev", 00:12:57.148 "uuid": "eb13d284-7b05-4331-84cd-42cf8fc0587c", 00:12:57.148 "is_configured": true, 00:12:57.148 "data_offset": 0, 00:12:57.148 "data_size": 65536 00:12:57.148 }, 00:12:57.148 { 00:12:57.148 "name": "BaseBdev2", 00:12:57.148 "uuid": "40d41620-0b4a-4dfe-bf7b-f0c8a23df46d", 00:12:57.148 "is_configured": true, 00:12:57.148 "data_offset": 0, 00:12:57.148 "data_size": 65536 00:12:57.148 }, 00:12:57.148 { 00:12:57.148 "name": "BaseBdev3", 00:12:57.148 "uuid": 
"91a17903-7c8d-4388-a0d1-e593b41aa208", 00:12:57.148 "is_configured": true, 00:12:57.148 "data_offset": 0, 00:12:57.148 "data_size": 65536 00:12:57.148 } 00:12:57.148 ] 00:12:57.148 } 00:12:57.148 } 00:12:57.148 }' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:57.148 BaseBdev2 00:12:57.148 BaseBdev3' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.148 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.408 
[2024-11-26 20:25:50.770738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:57.408 [2024-11-26 20:25:50.770772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.408 [2024-11-26 20:25:50.770855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.408 [2024-11-26 20:25:50.771143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.408 [2024-11-26 20:25:50.771154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67704 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67704 ']' 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67704 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67704 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67704' 00:12:57.408 killing process with pid 67704 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67704 00:12:57.408 [2024-11-26 
20:25:50.814222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.408 20:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67704 00:12:57.667 [2024-11-26 20:25:51.124324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:59.048 00:12:59.048 real 0m10.860s 00:12:59.048 user 0m17.274s 00:12:59.048 sys 0m1.849s 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.048 ************************************ 00:12:59.048 END TEST raid_state_function_test 00:12:59.048 ************************************ 00:12:59.048 20:25:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:59.048 20:25:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:59.048 20:25:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.048 20:25:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.048 ************************************ 00:12:59.048 START TEST raid_state_function_test_sb 00:12:59.048 ************************************ 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:59.048 20:25:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:59.048 
20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68325 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68325' 00:12:59.048 Process raid pid: 68325 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68325 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68325 ']' 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.048 20:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.048 [2024-11-26 20:25:52.457123] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:12:59.048 [2024-11-26 20:25:52.457849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.307 [2024-11-26 20:25:52.632931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.307 [2024-11-26 20:25:52.754303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.567 [2024-11-26 20:25:52.968325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.567 [2024-11-26 20:25:52.968455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.827 [2024-11-26 20:25:53.308931] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.827 [2024-11-26 20:25:53.308992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.827 [2024-11-26 20:25:53.309008] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.827 [2024-11-26 20:25:53.309019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.827 [2024-11-26 20:25:53.309026] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:59.827 [2024-11-26 20:25:53.309035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.827 "name": "Existed_Raid", 00:12:59.827 "uuid": "ccc45aa0-52b2-4721-b0bd-5b2e407288b9", 00:12:59.827 "strip_size_kb": 0, 00:12:59.827 "state": "configuring", 00:12:59.827 "raid_level": "raid1", 00:12:59.827 "superblock": true, 00:12:59.827 "num_base_bdevs": 3, 00:12:59.827 "num_base_bdevs_discovered": 0, 00:12:59.827 "num_base_bdevs_operational": 3, 00:12:59.827 "base_bdevs_list": [ 00:12:59.827 { 00:12:59.827 "name": "BaseBdev1", 00:12:59.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.827 "is_configured": false, 00:12:59.827 "data_offset": 0, 00:12:59.827 "data_size": 0 00:12:59.827 }, 00:12:59.827 { 00:12:59.827 "name": "BaseBdev2", 00:12:59.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.827 "is_configured": false, 00:12:59.827 "data_offset": 0, 00:12:59.827 "data_size": 0 00:12:59.827 }, 00:12:59.827 { 00:12:59.827 "name": "BaseBdev3", 00:12:59.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.827 "is_configured": false, 00:12:59.827 "data_offset": 0, 00:12:59.827 "data_size": 0 00:12:59.827 } 00:12:59.827 ] 00:12:59.827 }' 00:12:59.827 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.828 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 [2024-11-26 20:25:53.696287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.398 [2024-11-26 20:25:53.696380] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 [2024-11-26 20:25:53.704261] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.398 [2024-11-26 20:25:53.704308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.398 [2024-11-26 20:25:53.704319] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.398 [2024-11-26 20:25:53.704329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.398 [2024-11-26 20:25:53.704337] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:00.398 [2024-11-26 20:25:53.704346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 [2024-11-26 20:25:53.751730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.398 BaseBdev1 
00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 [ 00:13:00.398 { 00:13:00.398 "name": "BaseBdev1", 00:13:00.398 "aliases": [ 00:13:00.398 "ea4ca1ec-d94e-42f9-8d4b-ef265f5b7c13" 00:13:00.398 ], 00:13:00.398 "product_name": "Malloc disk", 00:13:00.398 "block_size": 512, 00:13:00.398 "num_blocks": 65536, 00:13:00.398 "uuid": "ea4ca1ec-d94e-42f9-8d4b-ef265f5b7c13", 00:13:00.398 "assigned_rate_limits": { 00:13:00.398 
"rw_ios_per_sec": 0, 00:13:00.398 "rw_mbytes_per_sec": 0, 00:13:00.398 "r_mbytes_per_sec": 0, 00:13:00.398 "w_mbytes_per_sec": 0 00:13:00.398 }, 00:13:00.398 "claimed": true, 00:13:00.398 "claim_type": "exclusive_write", 00:13:00.398 "zoned": false, 00:13:00.398 "supported_io_types": { 00:13:00.398 "read": true, 00:13:00.398 "write": true, 00:13:00.398 "unmap": true, 00:13:00.398 "flush": true, 00:13:00.398 "reset": true, 00:13:00.398 "nvme_admin": false, 00:13:00.398 "nvme_io": false, 00:13:00.398 "nvme_io_md": false, 00:13:00.398 "write_zeroes": true, 00:13:00.398 "zcopy": true, 00:13:00.398 "get_zone_info": false, 00:13:00.398 "zone_management": false, 00:13:00.398 "zone_append": false, 00:13:00.398 "compare": false, 00:13:00.398 "compare_and_write": false, 00:13:00.398 "abort": true, 00:13:00.398 "seek_hole": false, 00:13:00.398 "seek_data": false, 00:13:00.398 "copy": true, 00:13:00.398 "nvme_iov_md": false 00:13:00.398 }, 00:13:00.398 "memory_domains": [ 00:13:00.398 { 00:13:00.398 "dma_device_id": "system", 00:13:00.398 "dma_device_type": 1 00:13:00.398 }, 00:13:00.398 { 00:13:00.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.398 "dma_device_type": 2 00:13:00.398 } 00:13:00.398 ], 00:13:00.398 "driver_specific": {} 00:13:00.398 } 00:13:00.398 ] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.398 "name": "Existed_Raid", 00:13:00.398 "uuid": "74c7d02e-8bba-40cd-8b00-52491d50d13b", 00:13:00.398 "strip_size_kb": 0, 00:13:00.398 "state": "configuring", 00:13:00.398 "raid_level": "raid1", 00:13:00.398 "superblock": true, 00:13:00.398 "num_base_bdevs": 3, 00:13:00.398 "num_base_bdevs_discovered": 1, 00:13:00.398 "num_base_bdevs_operational": 3, 00:13:00.398 "base_bdevs_list": [ 00:13:00.398 { 00:13:00.398 "name": "BaseBdev1", 00:13:00.398 "uuid": "ea4ca1ec-d94e-42f9-8d4b-ef265f5b7c13", 00:13:00.398 "is_configured": true, 00:13:00.398 "data_offset": 2048, 00:13:00.398 "data_size": 63488 
00:13:00.398 }, 00:13:00.398 { 00:13:00.398 "name": "BaseBdev2", 00:13:00.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.398 "is_configured": false, 00:13:00.398 "data_offset": 0, 00:13:00.398 "data_size": 0 00:13:00.398 }, 00:13:00.398 { 00:13:00.398 "name": "BaseBdev3", 00:13:00.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.398 "is_configured": false, 00:13:00.398 "data_offset": 0, 00:13:00.398 "data_size": 0 00:13:00.398 } 00:13:00.398 ] 00:13:00.398 }' 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.398 20:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.974 [2024-11-26 20:25:54.242951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.974 [2024-11-26 20:25:54.243055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.974 [2024-11-26 20:25:54.255015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.974 [2024-11-26 20:25:54.257140] 
bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.974 [2024-11-26 20:25:54.257228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.974 [2024-11-26 20:25:54.257275] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:00.974 [2024-11-26 20:25:54.257303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.974 "name": "Existed_Raid", 00:13:00.974 "uuid": "9f22be9b-8b35-47b2-af01-badb32863e6a", 00:13:00.974 "strip_size_kb": 0, 00:13:00.974 "state": "configuring", 00:13:00.974 "raid_level": "raid1", 00:13:00.974 "superblock": true, 00:13:00.974 "num_base_bdevs": 3, 00:13:00.974 "num_base_bdevs_discovered": 1, 00:13:00.974 "num_base_bdevs_operational": 3, 00:13:00.974 "base_bdevs_list": [ 00:13:00.974 { 00:13:00.974 "name": "BaseBdev1", 00:13:00.974 "uuid": "ea4ca1ec-d94e-42f9-8d4b-ef265f5b7c13", 00:13:00.974 "is_configured": true, 00:13:00.974 "data_offset": 2048, 00:13:00.974 "data_size": 63488 00:13:00.974 }, 00:13:00.974 { 00:13:00.974 "name": "BaseBdev2", 00:13:00.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.974 "is_configured": false, 00:13:00.974 "data_offset": 0, 00:13:00.974 "data_size": 0 00:13:00.974 }, 00:13:00.974 { 00:13:00.974 "name": "BaseBdev3", 00:13:00.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.974 "is_configured": false, 00:13:00.974 "data_offset": 0, 00:13:00.974 "data_size": 0 00:13:00.974 } 00:13:00.974 ] 00:13:00.974 }' 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.974 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.233 [2024-11-26 20:25:54.723064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.233 BaseBdev2 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:01.233 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.233 [ 00:13:01.233 { 00:13:01.233 "name": "BaseBdev2", 00:13:01.233 "aliases": [ 00:13:01.233 "56c0c728-53bb-4e4f-9143-ca1fd1e1826c" 00:13:01.233 ], 00:13:01.233 "product_name": "Malloc disk", 00:13:01.233 "block_size": 512, 00:13:01.233 "num_blocks": 65536, 00:13:01.233 "uuid": "56c0c728-53bb-4e4f-9143-ca1fd1e1826c", 00:13:01.233 "assigned_rate_limits": { 00:13:01.233 "rw_ios_per_sec": 0, 00:13:01.233 "rw_mbytes_per_sec": 0, 00:13:01.233 "r_mbytes_per_sec": 0, 00:13:01.233 "w_mbytes_per_sec": 0 00:13:01.233 }, 00:13:01.233 "claimed": true, 00:13:01.234 "claim_type": "exclusive_write", 00:13:01.234 "zoned": false, 00:13:01.234 "supported_io_types": { 00:13:01.234 "read": true, 00:13:01.234 "write": true, 00:13:01.234 "unmap": true, 00:13:01.234 "flush": true, 00:13:01.234 "reset": true, 00:13:01.234 "nvme_admin": false, 00:13:01.234 "nvme_io": false, 00:13:01.234 "nvme_io_md": false, 00:13:01.234 "write_zeroes": true, 00:13:01.234 "zcopy": true, 00:13:01.234 "get_zone_info": false, 00:13:01.234 "zone_management": false, 00:13:01.234 "zone_append": false, 00:13:01.234 "compare": false, 00:13:01.234 "compare_and_write": false, 00:13:01.234 "abort": true, 00:13:01.234 "seek_hole": false, 00:13:01.234 "seek_data": false, 00:13:01.234 "copy": true, 00:13:01.234 "nvme_iov_md": false 00:13:01.234 }, 00:13:01.234 "memory_domains": [ 00:13:01.234 { 00:13:01.234 "dma_device_id": "system", 00:13:01.234 "dma_device_type": 1 00:13:01.234 }, 00:13:01.234 { 00:13:01.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.234 "dma_device_type": 2 00:13:01.234 } 00:13:01.234 ], 00:13:01.234 "driver_specific": {} 00:13:01.234 } 00:13:01.234 ] 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.234 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.493 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.493 
20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.493 "name": "Existed_Raid", 00:13:01.493 "uuid": "9f22be9b-8b35-47b2-af01-badb32863e6a", 00:13:01.493 "strip_size_kb": 0, 00:13:01.493 "state": "configuring", 00:13:01.493 "raid_level": "raid1", 00:13:01.493 "superblock": true, 00:13:01.493 "num_base_bdevs": 3, 00:13:01.493 "num_base_bdevs_discovered": 2, 00:13:01.493 "num_base_bdevs_operational": 3, 00:13:01.493 "base_bdevs_list": [ 00:13:01.493 { 00:13:01.493 "name": "BaseBdev1", 00:13:01.493 "uuid": "ea4ca1ec-d94e-42f9-8d4b-ef265f5b7c13", 00:13:01.493 "is_configured": true, 00:13:01.493 "data_offset": 2048, 00:13:01.493 "data_size": 63488 00:13:01.493 }, 00:13:01.493 { 00:13:01.493 "name": "BaseBdev2", 00:13:01.493 "uuid": "56c0c728-53bb-4e4f-9143-ca1fd1e1826c", 00:13:01.493 "is_configured": true, 00:13:01.493 "data_offset": 2048, 00:13:01.493 "data_size": 63488 00:13:01.493 }, 00:13:01.493 { 00:13:01.493 "name": "BaseBdev3", 00:13:01.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.493 "is_configured": false, 00:13:01.493 "data_offset": 0, 00:13:01.493 "data_size": 0 00:13:01.493 } 00:13:01.493 ] 00:13:01.493 }' 00:13:01.493 20:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.493 20:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.753 [2024-11-26 20:25:55.275101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.753 BaseBdev3 00:13:01.753 [2024-11-26 20:25:55.275469] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007e80 00:13:01.753 [2024-11-26 20:25:55.275495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.753 [2024-11-26 20:25:55.275772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:01.753 [2024-11-26 20:25:55.275938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:01.753 [2024-11-26 20:25:55.275948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:01.753 [2024-11-26 20:25:55.276085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.753 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.754 20:25:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:01.754 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.754 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.754 [ 00:13:01.754 { 00:13:01.754 "name": "BaseBdev3", 00:13:01.754 "aliases": [ 00:13:01.754 "a2acfe9b-9bb3-470a-982f-c3d0cb6a5dfe" 00:13:01.754 ], 00:13:01.754 "product_name": "Malloc disk", 00:13:01.754 "block_size": 512, 00:13:01.754 "num_blocks": 65536, 00:13:01.754 "uuid": "a2acfe9b-9bb3-470a-982f-c3d0cb6a5dfe", 00:13:01.754 "assigned_rate_limits": { 00:13:01.754 "rw_ios_per_sec": 0, 00:13:01.754 "rw_mbytes_per_sec": 0, 00:13:01.754 "r_mbytes_per_sec": 0, 00:13:01.754 "w_mbytes_per_sec": 0 00:13:01.754 }, 00:13:01.754 "claimed": true, 00:13:01.754 "claim_type": "exclusive_write", 00:13:01.754 "zoned": false, 00:13:01.754 "supported_io_types": { 00:13:01.754 "read": true, 00:13:01.754 "write": true, 00:13:01.754 "unmap": true, 00:13:01.754 "flush": true, 00:13:02.013 "reset": true, 00:13:02.013 "nvme_admin": false, 00:13:02.013 "nvme_io": false, 00:13:02.013 "nvme_io_md": false, 00:13:02.013 "write_zeroes": true, 00:13:02.013 "zcopy": true, 00:13:02.013 "get_zone_info": false, 00:13:02.013 "zone_management": false, 00:13:02.013 "zone_append": false, 00:13:02.013 "compare": false, 00:13:02.013 "compare_and_write": false, 00:13:02.013 "abort": true, 00:13:02.013 "seek_hole": false, 00:13:02.014 "seek_data": false, 00:13:02.014 "copy": true, 00:13:02.014 "nvme_iov_md": false 00:13:02.014 }, 00:13:02.014 "memory_domains": [ 00:13:02.014 { 00:13:02.014 "dma_device_id": "system", 00:13:02.014 "dma_device_type": 1 00:13:02.014 }, 00:13:02.014 { 00:13:02.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.014 "dma_device_type": 2 00:13:02.014 } 00:13:02.014 ], 00:13:02.014 "driver_specific": {} 00:13:02.014 } 00:13:02.014 ] 
00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.014 
20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.014 "name": "Existed_Raid", 00:13:02.014 "uuid": "9f22be9b-8b35-47b2-af01-badb32863e6a", 00:13:02.014 "strip_size_kb": 0, 00:13:02.014 "state": "online", 00:13:02.014 "raid_level": "raid1", 00:13:02.014 "superblock": true, 00:13:02.014 "num_base_bdevs": 3, 00:13:02.014 "num_base_bdevs_discovered": 3, 00:13:02.014 "num_base_bdevs_operational": 3, 00:13:02.014 "base_bdevs_list": [ 00:13:02.014 { 00:13:02.014 "name": "BaseBdev1", 00:13:02.014 "uuid": "ea4ca1ec-d94e-42f9-8d4b-ef265f5b7c13", 00:13:02.014 "is_configured": true, 00:13:02.014 "data_offset": 2048, 00:13:02.014 "data_size": 63488 00:13:02.014 }, 00:13:02.014 { 00:13:02.014 "name": "BaseBdev2", 00:13:02.014 "uuid": "56c0c728-53bb-4e4f-9143-ca1fd1e1826c", 00:13:02.014 "is_configured": true, 00:13:02.014 "data_offset": 2048, 00:13:02.014 "data_size": 63488 00:13:02.014 }, 00:13:02.014 { 00:13:02.014 "name": "BaseBdev3", 00:13:02.014 "uuid": "a2acfe9b-9bb3-470a-982f-c3d0cb6a5dfe", 00:13:02.014 "is_configured": true, 00:13:02.014 "data_offset": 2048, 00:13:02.014 "data_size": 63488 00:13:02.014 } 00:13:02.014 ] 00:13:02.014 }' 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.014 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.274 [2024-11-26 20:25:55.786650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.274 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.274 "name": "Existed_Raid", 00:13:02.274 "aliases": [ 00:13:02.274 "9f22be9b-8b35-47b2-af01-badb32863e6a" 00:13:02.274 ], 00:13:02.274 "product_name": "Raid Volume", 00:13:02.274 "block_size": 512, 00:13:02.274 "num_blocks": 63488, 00:13:02.274 "uuid": "9f22be9b-8b35-47b2-af01-badb32863e6a", 00:13:02.274 "assigned_rate_limits": { 00:13:02.274 "rw_ios_per_sec": 0, 00:13:02.274 "rw_mbytes_per_sec": 0, 00:13:02.274 "r_mbytes_per_sec": 0, 00:13:02.274 "w_mbytes_per_sec": 0 00:13:02.274 }, 00:13:02.274 "claimed": false, 00:13:02.274 "zoned": false, 00:13:02.274 "supported_io_types": { 00:13:02.274 "read": true, 00:13:02.274 "write": true, 00:13:02.274 "unmap": false, 00:13:02.274 "flush": false, 00:13:02.274 "reset": true, 00:13:02.274 "nvme_admin": false, 00:13:02.274 "nvme_io": false, 00:13:02.274 "nvme_io_md": false, 00:13:02.274 "write_zeroes": true, 
00:13:02.274 "zcopy": false, 00:13:02.274 "get_zone_info": false, 00:13:02.274 "zone_management": false, 00:13:02.274 "zone_append": false, 00:13:02.274 "compare": false, 00:13:02.274 "compare_and_write": false, 00:13:02.274 "abort": false, 00:13:02.274 "seek_hole": false, 00:13:02.274 "seek_data": false, 00:13:02.274 "copy": false, 00:13:02.274 "nvme_iov_md": false 00:13:02.274 }, 00:13:02.274 "memory_domains": [ 00:13:02.274 { 00:13:02.274 "dma_device_id": "system", 00:13:02.274 "dma_device_type": 1 00:13:02.274 }, 00:13:02.274 { 00:13:02.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.274 "dma_device_type": 2 00:13:02.274 }, 00:13:02.274 { 00:13:02.274 "dma_device_id": "system", 00:13:02.274 "dma_device_type": 1 00:13:02.274 }, 00:13:02.274 { 00:13:02.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.274 "dma_device_type": 2 00:13:02.274 }, 00:13:02.274 { 00:13:02.274 "dma_device_id": "system", 00:13:02.274 "dma_device_type": 1 00:13:02.274 }, 00:13:02.274 { 00:13:02.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.274 "dma_device_type": 2 00:13:02.274 } 00:13:02.274 ], 00:13:02.274 "driver_specific": { 00:13:02.274 "raid": { 00:13:02.274 "uuid": "9f22be9b-8b35-47b2-af01-badb32863e6a", 00:13:02.274 "strip_size_kb": 0, 00:13:02.274 "state": "online", 00:13:02.274 "raid_level": "raid1", 00:13:02.274 "superblock": true, 00:13:02.274 "num_base_bdevs": 3, 00:13:02.274 "num_base_bdevs_discovered": 3, 00:13:02.274 "num_base_bdevs_operational": 3, 00:13:02.274 "base_bdevs_list": [ 00:13:02.274 { 00:13:02.274 "name": "BaseBdev1", 00:13:02.275 "uuid": "ea4ca1ec-d94e-42f9-8d4b-ef265f5b7c13", 00:13:02.275 "is_configured": true, 00:13:02.275 "data_offset": 2048, 00:13:02.275 "data_size": 63488 00:13:02.275 }, 00:13:02.275 { 00:13:02.275 "name": "BaseBdev2", 00:13:02.275 "uuid": "56c0c728-53bb-4e4f-9143-ca1fd1e1826c", 00:13:02.275 "is_configured": true, 00:13:02.275 "data_offset": 2048, 00:13:02.275 "data_size": 63488 00:13:02.275 }, 00:13:02.275 { 
00:13:02.275 "name": "BaseBdev3", 00:13:02.275 "uuid": "a2acfe9b-9bb3-470a-982f-c3d0cb6a5dfe", 00:13:02.275 "is_configured": true, 00:13:02.275 "data_offset": 2048, 00:13:02.275 "data_size": 63488 00:13:02.275 } 00:13:02.275 ] 00:13:02.275 } 00:13:02.275 } 00:13:02.275 }' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:02.535 BaseBdev2 00:13:02.535 BaseBdev3' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.535 20:25:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.535 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.536 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.536 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.536 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:02.536 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.536 20:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.536 20:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.536 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.536 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.536 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.536 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:02.536 20:25:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.536 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.536 [2024-11-26 20:25:56.025972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.796 
20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.796 "name": "Existed_Raid", 00:13:02.796 "uuid": "9f22be9b-8b35-47b2-af01-badb32863e6a", 00:13:02.796 "strip_size_kb": 0, 00:13:02.796 "state": "online", 00:13:02.796 "raid_level": "raid1", 00:13:02.796 "superblock": true, 00:13:02.796 "num_base_bdevs": 3, 00:13:02.796 "num_base_bdevs_discovered": 2, 00:13:02.796 "num_base_bdevs_operational": 2, 00:13:02.796 "base_bdevs_list": [ 00:13:02.796 { 00:13:02.796 "name": null, 00:13:02.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.796 "is_configured": false, 00:13:02.796 "data_offset": 0, 00:13:02.796 "data_size": 63488 00:13:02.796 }, 00:13:02.796 { 00:13:02.796 "name": "BaseBdev2", 00:13:02.796 "uuid": "56c0c728-53bb-4e4f-9143-ca1fd1e1826c", 00:13:02.796 "is_configured": true, 00:13:02.796 "data_offset": 2048, 00:13:02.796 "data_size": 63488 00:13:02.796 }, 00:13:02.796 { 00:13:02.796 "name": "BaseBdev3", 00:13:02.796 "uuid": "a2acfe9b-9bb3-470a-982f-c3d0cb6a5dfe", 00:13:02.796 "is_configured": true, 00:13:02.796 "data_offset": 2048, 00:13:02.796 "data_size": 63488 00:13:02.796 } 00:13:02.796 ] 00:13:02.796 }' 00:13:02.796 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.796 
20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.056 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.056 [2024-11-26 20:25:56.590098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.315 [2024-11-26 20:25:56.746006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:03.315 [2024-11-26 20:25:56.746119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.315 [2024-11-26 20:25:56.851579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.315 [2024-11-26 20:25:56.851644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.315 [2024-11-26 20:25:56.851658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:03.315 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 BaseBdev2 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 [ 00:13:03.576 { 00:13:03.576 "name": "BaseBdev2", 00:13:03.576 "aliases": [ 00:13:03.576 "047ce10e-af58-47f4-8cf5-932e346b0969" 00:13:03.576 ], 00:13:03.576 "product_name": "Malloc disk", 00:13:03.576 "block_size": 512, 00:13:03.576 "num_blocks": 65536, 00:13:03.576 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:03.576 "assigned_rate_limits": { 00:13:03.576 "rw_ios_per_sec": 0, 00:13:03.576 "rw_mbytes_per_sec": 0, 00:13:03.576 "r_mbytes_per_sec": 0, 00:13:03.576 "w_mbytes_per_sec": 0 00:13:03.576 }, 00:13:03.576 "claimed": false, 00:13:03.576 "zoned": false, 00:13:03.576 "supported_io_types": { 00:13:03.576 "read": true, 00:13:03.576 "write": true, 00:13:03.576 "unmap": true, 00:13:03.576 "flush": true, 00:13:03.576 "reset": true, 00:13:03.576 "nvme_admin": false, 00:13:03.576 "nvme_io": false, 00:13:03.576 
"nvme_io_md": false, 00:13:03.576 "write_zeroes": true, 00:13:03.576 "zcopy": true, 00:13:03.576 "get_zone_info": false, 00:13:03.576 "zone_management": false, 00:13:03.576 "zone_append": false, 00:13:03.576 "compare": false, 00:13:03.576 "compare_and_write": false, 00:13:03.576 "abort": true, 00:13:03.576 "seek_hole": false, 00:13:03.576 "seek_data": false, 00:13:03.576 "copy": true, 00:13:03.576 "nvme_iov_md": false 00:13:03.576 }, 00:13:03.576 "memory_domains": [ 00:13:03.576 { 00:13:03.576 "dma_device_id": "system", 00:13:03.576 "dma_device_type": 1 00:13:03.576 }, 00:13:03.576 { 00:13:03.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.576 "dma_device_type": 2 00:13:03.576 } 00:13:03.576 ], 00:13:03.576 "driver_specific": {} 00:13:03.576 } 00:13:03.576 ] 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.576 20:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 BaseBdev3 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 [ 00:13:03.576 { 00:13:03.576 "name": "BaseBdev3", 00:13:03.576 "aliases": [ 00:13:03.576 "35146360-1aa9-45ea-b291-c2d952d49a53" 00:13:03.576 ], 00:13:03.576 "product_name": "Malloc disk", 00:13:03.576 "block_size": 512, 00:13:03.576 "num_blocks": 65536, 00:13:03.576 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:03.576 "assigned_rate_limits": { 00:13:03.576 "rw_ios_per_sec": 0, 00:13:03.576 "rw_mbytes_per_sec": 0, 00:13:03.576 "r_mbytes_per_sec": 0, 00:13:03.576 "w_mbytes_per_sec": 0 00:13:03.576 }, 00:13:03.576 "claimed": false, 00:13:03.576 "zoned": false, 00:13:03.576 "supported_io_types": { 00:13:03.576 "read": true, 00:13:03.576 "write": true, 00:13:03.576 "unmap": true, 00:13:03.576 "flush": true, 00:13:03.576 "reset": true, 00:13:03.576 "nvme_admin": false, 
00:13:03.576 "nvme_io": false, 00:13:03.576 "nvme_io_md": false, 00:13:03.576 "write_zeroes": true, 00:13:03.576 "zcopy": true, 00:13:03.576 "get_zone_info": false, 00:13:03.576 "zone_management": false, 00:13:03.576 "zone_append": false, 00:13:03.576 "compare": false, 00:13:03.576 "compare_and_write": false, 00:13:03.576 "abort": true, 00:13:03.576 "seek_hole": false, 00:13:03.576 "seek_data": false, 00:13:03.576 "copy": true, 00:13:03.576 "nvme_iov_md": false 00:13:03.576 }, 00:13:03.576 "memory_domains": [ 00:13:03.576 { 00:13:03.576 "dma_device_id": "system", 00:13:03.576 "dma_device_type": 1 00:13:03.576 }, 00:13:03.576 { 00:13:03.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.576 "dma_device_type": 2 00:13:03.576 } 00:13:03.576 ], 00:13:03.576 "driver_specific": {} 00:13:03.576 } 00:13:03.576 ] 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.576 [2024-11-26 20:25:57.074038] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.576 [2024-11-26 20:25:57.074130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.576 [2024-11-26 20:25:57.074173] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.576 [2024-11-26 20:25:57.076178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.576 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.577 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 
20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.837 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.837 "name": "Existed_Raid", 00:13:03.837 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:03.837 "strip_size_kb": 0, 00:13:03.837 "state": "configuring", 00:13:03.837 "raid_level": "raid1", 00:13:03.837 "superblock": true, 00:13:03.837 "num_base_bdevs": 3, 00:13:03.837 "num_base_bdevs_discovered": 2, 00:13:03.837 "num_base_bdevs_operational": 3, 00:13:03.837 "base_bdevs_list": [ 00:13:03.837 { 00:13:03.837 "name": "BaseBdev1", 00:13:03.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.837 "is_configured": false, 00:13:03.837 "data_offset": 0, 00:13:03.837 "data_size": 0 00:13:03.837 }, 00:13:03.837 { 00:13:03.837 "name": "BaseBdev2", 00:13:03.837 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:03.837 "is_configured": true, 00:13:03.837 "data_offset": 2048, 00:13:03.837 "data_size": 63488 00:13:03.837 }, 00:13:03.837 { 00:13:03.837 "name": "BaseBdev3", 00:13:03.837 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:03.837 "is_configured": true, 00:13:03.837 "data_offset": 2048, 00:13:03.837 "data_size": 63488 00:13:03.837 } 00:13:03.837 ] 00:13:03.837 }' 00:13:03.837 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.837 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.096 [2024-11-26 20:25:57.557268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.096 20:25:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.096 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.096 "name": 
"Existed_Raid", 00:13:04.096 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:04.096 "strip_size_kb": 0, 00:13:04.096 "state": "configuring", 00:13:04.096 "raid_level": "raid1", 00:13:04.096 "superblock": true, 00:13:04.096 "num_base_bdevs": 3, 00:13:04.096 "num_base_bdevs_discovered": 1, 00:13:04.096 "num_base_bdevs_operational": 3, 00:13:04.096 "base_bdevs_list": [ 00:13:04.096 { 00:13:04.096 "name": "BaseBdev1", 00:13:04.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.096 "is_configured": false, 00:13:04.096 "data_offset": 0, 00:13:04.096 "data_size": 0 00:13:04.096 }, 00:13:04.096 { 00:13:04.096 "name": null, 00:13:04.096 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:04.096 "is_configured": false, 00:13:04.097 "data_offset": 0, 00:13:04.097 "data_size": 63488 00:13:04.097 }, 00:13:04.097 { 00:13:04.097 "name": "BaseBdev3", 00:13:04.097 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:04.097 "is_configured": true, 00:13:04.097 "data_offset": 2048, 00:13:04.097 "data_size": 63488 00:13:04.097 } 00:13:04.097 ] 00:13:04.097 }' 00:13:04.097 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.097 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.677 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.677 20:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.677 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.677 20:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:04.677 
20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.677 [2024-11-26 20:25:58.079325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.677 BaseBdev1 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:04.677 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.678 [ 00:13:04.678 { 00:13:04.678 "name": "BaseBdev1", 00:13:04.678 "aliases": [ 00:13:04.678 "1d1b3fa3-f223-4238-b9dc-6caf8586c94c" 00:13:04.678 ], 00:13:04.678 "product_name": "Malloc disk", 00:13:04.678 "block_size": 512, 00:13:04.678 "num_blocks": 65536, 00:13:04.678 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:04.678 "assigned_rate_limits": { 00:13:04.678 "rw_ios_per_sec": 0, 00:13:04.678 "rw_mbytes_per_sec": 0, 00:13:04.678 "r_mbytes_per_sec": 0, 00:13:04.678 "w_mbytes_per_sec": 0 00:13:04.678 }, 00:13:04.678 "claimed": true, 00:13:04.678 "claim_type": "exclusive_write", 00:13:04.678 "zoned": false, 00:13:04.678 "supported_io_types": { 00:13:04.678 "read": true, 00:13:04.678 "write": true, 00:13:04.678 "unmap": true, 00:13:04.678 "flush": true, 00:13:04.678 "reset": true, 00:13:04.678 "nvme_admin": false, 00:13:04.678 "nvme_io": false, 00:13:04.678 "nvme_io_md": false, 00:13:04.678 "write_zeroes": true, 00:13:04.678 "zcopy": true, 00:13:04.678 "get_zone_info": false, 00:13:04.678 "zone_management": false, 00:13:04.678 "zone_append": false, 00:13:04.678 "compare": false, 00:13:04.678 "compare_and_write": false, 00:13:04.678 "abort": true, 00:13:04.678 "seek_hole": false, 00:13:04.678 "seek_data": false, 00:13:04.678 "copy": true, 00:13:04.678 "nvme_iov_md": false 00:13:04.678 }, 00:13:04.678 "memory_domains": [ 00:13:04.678 { 00:13:04.678 "dma_device_id": "system", 00:13:04.678 "dma_device_type": 1 00:13:04.678 }, 00:13:04.678 { 00:13:04.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.678 "dma_device_type": 2 00:13:04.678 } 00:13:04.678 ], 00:13:04.678 "driver_specific": {} 00:13:04.678 } 00:13:04.678 ] 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:04.678 
20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.678 "name": "Existed_Raid", 00:13:04.678 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:04.678 "strip_size_kb": 0, 
00:13:04.678 "state": "configuring", 00:13:04.678 "raid_level": "raid1", 00:13:04.678 "superblock": true, 00:13:04.678 "num_base_bdevs": 3, 00:13:04.678 "num_base_bdevs_discovered": 2, 00:13:04.678 "num_base_bdevs_operational": 3, 00:13:04.678 "base_bdevs_list": [ 00:13:04.678 { 00:13:04.678 "name": "BaseBdev1", 00:13:04.678 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:04.678 "is_configured": true, 00:13:04.678 "data_offset": 2048, 00:13:04.678 "data_size": 63488 00:13:04.678 }, 00:13:04.678 { 00:13:04.678 "name": null, 00:13:04.678 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:04.678 "is_configured": false, 00:13:04.678 "data_offset": 0, 00:13:04.678 "data_size": 63488 00:13:04.678 }, 00:13:04.678 { 00:13:04.678 "name": "BaseBdev3", 00:13:04.678 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:04.678 "is_configured": true, 00:13:04.678 "data_offset": 2048, 00:13:04.678 "data_size": 63488 00:13:04.678 } 00:13:04.678 ] 00:13:04.678 }' 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.678 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.247 [2024-11-26 20:25:58.618461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.247 "name": "Existed_Raid", 00:13:05.247 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:05.247 "strip_size_kb": 0, 00:13:05.247 "state": "configuring", 00:13:05.247 "raid_level": "raid1", 00:13:05.247 "superblock": true, 00:13:05.247 "num_base_bdevs": 3, 00:13:05.247 "num_base_bdevs_discovered": 1, 00:13:05.247 "num_base_bdevs_operational": 3, 00:13:05.247 "base_bdevs_list": [ 00:13:05.247 { 00:13:05.247 "name": "BaseBdev1", 00:13:05.247 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:05.247 "is_configured": true, 00:13:05.247 "data_offset": 2048, 00:13:05.247 "data_size": 63488 00:13:05.247 }, 00:13:05.247 { 00:13:05.247 "name": null, 00:13:05.247 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:05.247 "is_configured": false, 00:13:05.247 "data_offset": 0, 00:13:05.247 "data_size": 63488 00:13:05.247 }, 00:13:05.247 { 00:13:05.247 "name": null, 00:13:05.247 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:05.247 "is_configured": false, 00:13:05.247 "data_offset": 0, 00:13:05.247 "data_size": 63488 00:13:05.247 } 00:13:05.247 ] 00:13:05.247 }' 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.247 20:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.819 [2024-11-26 20:25:59.145639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.819 "name": "Existed_Raid", 00:13:05.819 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:05.819 "strip_size_kb": 0, 00:13:05.819 "state": "configuring", 00:13:05.819 "raid_level": "raid1", 00:13:05.819 "superblock": true, 00:13:05.819 "num_base_bdevs": 3, 00:13:05.819 "num_base_bdevs_discovered": 2, 00:13:05.819 "num_base_bdevs_operational": 3, 00:13:05.819 "base_bdevs_list": [ 00:13:05.819 { 00:13:05.819 "name": "BaseBdev1", 00:13:05.819 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:05.819 "is_configured": true, 00:13:05.819 "data_offset": 2048, 00:13:05.819 "data_size": 63488 00:13:05.819 }, 00:13:05.819 { 00:13:05.819 "name": null, 00:13:05.819 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:05.819 "is_configured": false, 00:13:05.819 "data_offset": 0, 00:13:05.819 "data_size": 63488 00:13:05.819 }, 00:13:05.819 { 00:13:05.819 "name": "BaseBdev3", 00:13:05.819 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:05.819 "is_configured": true, 00:13:05.819 "data_offset": 2048, 00:13:05.819 "data_size": 63488 00:13:05.819 } 00:13:05.819 ] 00:13:05.819 }' 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.819 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.078 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.078 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.078 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.078 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.078 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.336 [2024-11-26 20:25:59.656768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.336 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.337 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.337 "name": "Existed_Raid", 00:13:06.337 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:06.337 "strip_size_kb": 0, 00:13:06.337 "state": "configuring", 00:13:06.337 "raid_level": "raid1", 00:13:06.337 "superblock": true, 00:13:06.337 "num_base_bdevs": 3, 00:13:06.337 "num_base_bdevs_discovered": 1, 00:13:06.337 "num_base_bdevs_operational": 3, 00:13:06.337 "base_bdevs_list": [ 00:13:06.337 { 00:13:06.337 "name": null, 00:13:06.337 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:06.337 "is_configured": false, 00:13:06.337 "data_offset": 0, 00:13:06.337 "data_size": 63488 00:13:06.337 }, 00:13:06.337 { 00:13:06.337 "name": null, 00:13:06.337 "uuid": 
"047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:06.337 "is_configured": false, 00:13:06.337 "data_offset": 0, 00:13:06.337 "data_size": 63488 00:13:06.337 }, 00:13:06.337 { 00:13:06.337 "name": "BaseBdev3", 00:13:06.337 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:06.337 "is_configured": true, 00:13:06.337 "data_offset": 2048, 00:13:06.337 "data_size": 63488 00:13:06.337 } 00:13:06.337 ] 00:13:06.337 }' 00:13:06.337 20:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.337 20:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.904 [2024-11-26 20:26:00.290596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.904 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.904 "name": "Existed_Raid", 00:13:06.904 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:06.905 "strip_size_kb": 0, 00:13:06.905 "state": "configuring", 00:13:06.905 
"raid_level": "raid1", 00:13:06.905 "superblock": true, 00:13:06.905 "num_base_bdevs": 3, 00:13:06.905 "num_base_bdevs_discovered": 2, 00:13:06.905 "num_base_bdevs_operational": 3, 00:13:06.905 "base_bdevs_list": [ 00:13:06.905 { 00:13:06.905 "name": null, 00:13:06.905 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:06.905 "is_configured": false, 00:13:06.905 "data_offset": 0, 00:13:06.905 "data_size": 63488 00:13:06.905 }, 00:13:06.905 { 00:13:06.905 "name": "BaseBdev2", 00:13:06.905 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:06.905 "is_configured": true, 00:13:06.905 "data_offset": 2048, 00:13:06.905 "data_size": 63488 00:13:06.905 }, 00:13:06.905 { 00:13:06.905 "name": "BaseBdev3", 00:13:06.905 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:06.905 "is_configured": true, 00:13:06.905 "data_offset": 2048, 00:13:06.905 "data_size": 63488 00:13:06.905 } 00:13:06.905 ] 00:13:06.905 }' 00:13:06.905 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.905 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.474 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.474 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.474 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.475 20:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1d1b3fa3-f223-4238-b9dc-6caf8586c94c 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 [2024-11-26 20:26:00.863723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:07.475 [2024-11-26 20:26:00.864015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:07.475 [2024-11-26 20:26:00.864061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.475 [2024-11-26 20:26:00.864345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:07.475 NewBaseBdev 00:13:07.475 [2024-11-26 20:26:00.864519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:07.475 [2024-11-26 20:26:00.864533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:07.475 [2024-11-26 20:26:00.864716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:07.475 
20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 [ 00:13:07.475 { 00:13:07.475 "name": "NewBaseBdev", 00:13:07.475 "aliases": [ 00:13:07.475 "1d1b3fa3-f223-4238-b9dc-6caf8586c94c" 00:13:07.475 ], 00:13:07.475 "product_name": "Malloc disk", 00:13:07.475 "block_size": 512, 00:13:07.475 "num_blocks": 65536, 00:13:07.475 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:07.475 "assigned_rate_limits": { 00:13:07.475 "rw_ios_per_sec": 0, 00:13:07.475 "rw_mbytes_per_sec": 0, 00:13:07.475 "r_mbytes_per_sec": 0, 00:13:07.475 "w_mbytes_per_sec": 0 00:13:07.475 }, 00:13:07.475 "claimed": true, 00:13:07.475 "claim_type": "exclusive_write", 00:13:07.475 
"zoned": false, 00:13:07.475 "supported_io_types": { 00:13:07.475 "read": true, 00:13:07.475 "write": true, 00:13:07.475 "unmap": true, 00:13:07.475 "flush": true, 00:13:07.475 "reset": true, 00:13:07.475 "nvme_admin": false, 00:13:07.475 "nvme_io": false, 00:13:07.475 "nvme_io_md": false, 00:13:07.475 "write_zeroes": true, 00:13:07.475 "zcopy": true, 00:13:07.475 "get_zone_info": false, 00:13:07.475 "zone_management": false, 00:13:07.475 "zone_append": false, 00:13:07.475 "compare": false, 00:13:07.475 "compare_and_write": false, 00:13:07.475 "abort": true, 00:13:07.475 "seek_hole": false, 00:13:07.475 "seek_data": false, 00:13:07.475 "copy": true, 00:13:07.475 "nvme_iov_md": false 00:13:07.475 }, 00:13:07.475 "memory_domains": [ 00:13:07.475 { 00:13:07.475 "dma_device_id": "system", 00:13:07.475 "dma_device_type": 1 00:13:07.475 }, 00:13:07.475 { 00:13:07.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.475 "dma_device_type": 2 00:13:07.475 } 00:13:07.475 ], 00:13:07.475 "driver_specific": {} 00:13:07.475 } 00:13:07.475 ] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.475 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.475 "name": "Existed_Raid", 00:13:07.475 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:07.475 "strip_size_kb": 0, 00:13:07.475 "state": "online", 00:13:07.475 "raid_level": "raid1", 00:13:07.475 "superblock": true, 00:13:07.475 "num_base_bdevs": 3, 00:13:07.475 "num_base_bdevs_discovered": 3, 00:13:07.475 "num_base_bdevs_operational": 3, 00:13:07.475 "base_bdevs_list": [ 00:13:07.475 { 00:13:07.475 "name": "NewBaseBdev", 00:13:07.475 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:07.475 "is_configured": true, 00:13:07.475 "data_offset": 2048, 00:13:07.475 "data_size": 63488 00:13:07.475 }, 00:13:07.475 { 00:13:07.475 "name": "BaseBdev2", 00:13:07.475 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:07.475 "is_configured": true, 00:13:07.475 "data_offset": 2048, 00:13:07.475 "data_size": 63488 00:13:07.475 }, 00:13:07.475 
{ 00:13:07.475 "name": "BaseBdev3", 00:13:07.475 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:07.475 "is_configured": true, 00:13:07.475 "data_offset": 2048, 00:13:07.475 "data_size": 63488 00:13:07.475 } 00:13:07.475 ] 00:13:07.475 }' 00:13:07.476 20:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.476 20:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.045 [2024-11-26 20:26:01.391208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.045 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.045 "name": "Existed_Raid", 00:13:08.045 
"aliases": [ 00:13:08.045 "a86b3e42-556b-47b2-a378-c7d4c16aa00a" 00:13:08.045 ], 00:13:08.045 "product_name": "Raid Volume", 00:13:08.045 "block_size": 512, 00:13:08.045 "num_blocks": 63488, 00:13:08.045 "uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:08.045 "assigned_rate_limits": { 00:13:08.045 "rw_ios_per_sec": 0, 00:13:08.045 "rw_mbytes_per_sec": 0, 00:13:08.045 "r_mbytes_per_sec": 0, 00:13:08.045 "w_mbytes_per_sec": 0 00:13:08.045 }, 00:13:08.045 "claimed": false, 00:13:08.045 "zoned": false, 00:13:08.045 "supported_io_types": { 00:13:08.046 "read": true, 00:13:08.046 "write": true, 00:13:08.046 "unmap": false, 00:13:08.046 "flush": false, 00:13:08.046 "reset": true, 00:13:08.046 "nvme_admin": false, 00:13:08.046 "nvme_io": false, 00:13:08.046 "nvme_io_md": false, 00:13:08.046 "write_zeroes": true, 00:13:08.046 "zcopy": false, 00:13:08.046 "get_zone_info": false, 00:13:08.046 "zone_management": false, 00:13:08.046 "zone_append": false, 00:13:08.046 "compare": false, 00:13:08.046 "compare_and_write": false, 00:13:08.046 "abort": false, 00:13:08.046 "seek_hole": false, 00:13:08.046 "seek_data": false, 00:13:08.046 "copy": false, 00:13:08.046 "nvme_iov_md": false 00:13:08.046 }, 00:13:08.046 "memory_domains": [ 00:13:08.046 { 00:13:08.046 "dma_device_id": "system", 00:13:08.046 "dma_device_type": 1 00:13:08.046 }, 00:13:08.046 { 00:13:08.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.046 "dma_device_type": 2 00:13:08.046 }, 00:13:08.046 { 00:13:08.046 "dma_device_id": "system", 00:13:08.046 "dma_device_type": 1 00:13:08.046 }, 00:13:08.046 { 00:13:08.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.046 "dma_device_type": 2 00:13:08.046 }, 00:13:08.046 { 00:13:08.046 "dma_device_id": "system", 00:13:08.046 "dma_device_type": 1 00:13:08.046 }, 00:13:08.046 { 00:13:08.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.046 "dma_device_type": 2 00:13:08.046 } 00:13:08.046 ], 00:13:08.046 "driver_specific": { 00:13:08.046 "raid": { 00:13:08.046 
"uuid": "a86b3e42-556b-47b2-a378-c7d4c16aa00a", 00:13:08.046 "strip_size_kb": 0, 00:13:08.046 "state": "online", 00:13:08.046 "raid_level": "raid1", 00:13:08.046 "superblock": true, 00:13:08.046 "num_base_bdevs": 3, 00:13:08.046 "num_base_bdevs_discovered": 3, 00:13:08.046 "num_base_bdevs_operational": 3, 00:13:08.046 "base_bdevs_list": [ 00:13:08.046 { 00:13:08.046 "name": "NewBaseBdev", 00:13:08.046 "uuid": "1d1b3fa3-f223-4238-b9dc-6caf8586c94c", 00:13:08.046 "is_configured": true, 00:13:08.046 "data_offset": 2048, 00:13:08.046 "data_size": 63488 00:13:08.046 }, 00:13:08.046 { 00:13:08.046 "name": "BaseBdev2", 00:13:08.046 "uuid": "047ce10e-af58-47f4-8cf5-932e346b0969", 00:13:08.046 "is_configured": true, 00:13:08.046 "data_offset": 2048, 00:13:08.046 "data_size": 63488 00:13:08.046 }, 00:13:08.046 { 00:13:08.046 "name": "BaseBdev3", 00:13:08.046 "uuid": "35146360-1aa9-45ea-b291-c2d952d49a53", 00:13:08.046 "is_configured": true, 00:13:08.046 "data_offset": 2048, 00:13:08.046 "data_size": 63488 00:13:08.046 } 00:13:08.046 ] 00:13:08.046 } 00:13:08.046 } 00:13:08.046 }' 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:08.046 BaseBdev2 00:13:08.046 BaseBdev3' 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.046 
20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.046 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.305 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.306 [2024-11-26 20:26:01.658442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.306 [2024-11-26 20:26:01.658478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.306 [2024-11-26 20:26:01.658562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.306 [2024-11-26 20:26:01.658879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.306 [2024-11-26 20:26:01.658891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68325 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68325 ']' 00:13:08.306 20:26:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68325 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68325 00:13:08.306 killing process with pid 68325 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68325' 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68325 00:13:08.306 [2024-11-26 20:26:01.693897] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.306 20:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68325 00:13:08.565 [2024-11-26 20:26:02.027286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.944 20:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.944 00:13:09.944 real 0m10.851s 00:13:09.944 user 0m17.291s 00:13:09.944 sys 0m1.791s 00:13:09.944 20:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.944 20:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.944 ************************************ 00:13:09.944 END TEST raid_state_function_test_sb 00:13:09.944 ************************************ 00:13:09.944 20:26:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:13:09.944 20:26:03 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.944 20:26:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.944 20:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.944 ************************************ 00:13:09.944 START TEST raid_superblock_test 00:13:09.944 ************************************ 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:09.944 20:26:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68951 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68951 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68951 ']' 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.944 20:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.944 [2024-11-26 20:26:03.364497] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:13:09.944 [2024-11-26 20:26:03.364725] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68951 ] 00:13:10.204 [2024-11-26 20:26:03.526613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.204 [2024-11-26 20:26:03.645301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.461 [2024-11-26 20:26:03.850300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.461 [2024-11-26 20:26:03.850367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:10.719 
20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 malloc1 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 [2024-11-26 20:26:04.257614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:10.719 [2024-11-26 20:26:04.257749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.719 [2024-11-26 20:26:04.257802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.719 [2024-11-26 20:26:04.257842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.719 [2024-11-26 20:26:04.260146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.719 [2024-11-26 20:26:04.260265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:10.719 pt1 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.719 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.979 malloc2 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.979 [2024-11-26 20:26:04.324098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.979 [2024-11-26 20:26:04.324222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.979 [2024-11-26 20:26:04.324311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.979 [2024-11-26 20:26:04.324386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.979 [2024-11-26 20:26:04.326940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.979 [2024-11-26 20:26:04.326980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.979 
pt2 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.979 malloc3 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.979 [2024-11-26 20:26:04.395731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:10.979 [2024-11-26 20:26:04.395841] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.979 [2024-11-26 20:26:04.395889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:10.979 [2024-11-26 20:26:04.395965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.979 [2024-11-26 20:26:04.398308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.979 [2024-11-26 20:26:04.398378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:10.979 pt3 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.979 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.979 [2024-11-26 20:26:04.407790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:10.979 [2024-11-26 20:26:04.409764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.979 [2024-11-26 20:26:04.409937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:10.980 [2024-11-26 20:26:04.410226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:10.980 [2024-11-26 20:26:04.410305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.980 [2024-11-26 20:26:04.410669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:10.980 
[2024-11-26 20:26:04.410919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:10.980 [2024-11-26 20:26:04.410971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:10.980 [2024-11-26 20:26:04.411232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.980 "name": "raid_bdev1", 00:13:10.980 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:10.980 "strip_size_kb": 0, 00:13:10.980 "state": "online", 00:13:10.980 "raid_level": "raid1", 00:13:10.980 "superblock": true, 00:13:10.980 "num_base_bdevs": 3, 00:13:10.980 "num_base_bdevs_discovered": 3, 00:13:10.980 "num_base_bdevs_operational": 3, 00:13:10.980 "base_bdevs_list": [ 00:13:10.980 { 00:13:10.980 "name": "pt1", 00:13:10.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.980 "is_configured": true, 00:13:10.980 "data_offset": 2048, 00:13:10.980 "data_size": 63488 00:13:10.980 }, 00:13:10.980 { 00:13:10.980 "name": "pt2", 00:13:10.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.980 "is_configured": true, 00:13:10.980 "data_offset": 2048, 00:13:10.980 "data_size": 63488 00:13:10.980 }, 00:13:10.980 { 00:13:10.980 "name": "pt3", 00:13:10.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.980 "is_configured": true, 00:13:10.980 "data_offset": 2048, 00:13:10.980 "data_size": 63488 00:13:10.980 } 00:13:10.980 ] 00:13:10.980 }' 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.980 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.548 20:26:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.548 [2024-11-26 20:26:04.855325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.548 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.548 "name": "raid_bdev1", 00:13:11.548 "aliases": [ 00:13:11.548 "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08" 00:13:11.548 ], 00:13:11.548 "product_name": "Raid Volume", 00:13:11.548 "block_size": 512, 00:13:11.548 "num_blocks": 63488, 00:13:11.548 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:11.548 "assigned_rate_limits": { 00:13:11.548 "rw_ios_per_sec": 0, 00:13:11.548 "rw_mbytes_per_sec": 0, 00:13:11.548 "r_mbytes_per_sec": 0, 00:13:11.548 "w_mbytes_per_sec": 0 00:13:11.548 }, 00:13:11.548 "claimed": false, 00:13:11.548 "zoned": false, 00:13:11.548 "supported_io_types": { 00:13:11.548 "read": true, 00:13:11.548 "write": true, 00:13:11.548 "unmap": false, 00:13:11.548 "flush": false, 00:13:11.548 "reset": true, 00:13:11.548 "nvme_admin": false, 00:13:11.548 "nvme_io": false, 00:13:11.548 "nvme_io_md": false, 00:13:11.548 "write_zeroes": true, 00:13:11.548 "zcopy": false, 00:13:11.548 "get_zone_info": false, 00:13:11.548 "zone_management": false, 00:13:11.548 "zone_append": false, 00:13:11.548 "compare": false, 00:13:11.548 
"compare_and_write": false, 00:13:11.548 "abort": false, 00:13:11.548 "seek_hole": false, 00:13:11.548 "seek_data": false, 00:13:11.548 "copy": false, 00:13:11.548 "nvme_iov_md": false 00:13:11.548 }, 00:13:11.548 "memory_domains": [ 00:13:11.548 { 00:13:11.548 "dma_device_id": "system", 00:13:11.548 "dma_device_type": 1 00:13:11.548 }, 00:13:11.548 { 00:13:11.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.548 "dma_device_type": 2 00:13:11.548 }, 00:13:11.548 { 00:13:11.548 "dma_device_id": "system", 00:13:11.548 "dma_device_type": 1 00:13:11.548 }, 00:13:11.548 { 00:13:11.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.548 "dma_device_type": 2 00:13:11.548 }, 00:13:11.548 { 00:13:11.548 "dma_device_id": "system", 00:13:11.548 "dma_device_type": 1 00:13:11.548 }, 00:13:11.548 { 00:13:11.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.548 "dma_device_type": 2 00:13:11.548 } 00:13:11.548 ], 00:13:11.548 "driver_specific": { 00:13:11.548 "raid": { 00:13:11.549 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:11.549 "strip_size_kb": 0, 00:13:11.549 "state": "online", 00:13:11.549 "raid_level": "raid1", 00:13:11.549 "superblock": true, 00:13:11.549 "num_base_bdevs": 3, 00:13:11.549 "num_base_bdevs_discovered": 3, 00:13:11.549 "num_base_bdevs_operational": 3, 00:13:11.549 "base_bdevs_list": [ 00:13:11.549 { 00:13:11.549 "name": "pt1", 00:13:11.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.549 "is_configured": true, 00:13:11.549 "data_offset": 2048, 00:13:11.549 "data_size": 63488 00:13:11.549 }, 00:13:11.549 { 00:13:11.549 "name": "pt2", 00:13:11.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.549 "is_configured": true, 00:13:11.549 "data_offset": 2048, 00:13:11.549 "data_size": 63488 00:13:11.549 }, 00:13:11.549 { 00:13:11.549 "name": "pt3", 00:13:11.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.549 "is_configured": true, 00:13:11.549 "data_offset": 2048, 00:13:11.549 "data_size": 63488 00:13:11.549 } 
00:13:11.549 ] 00:13:11.549 } 00:13:11.549 } 00:13:11.549 }' 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:11.549 pt2 00:13:11.549 pt3' 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.549 20:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.549 20:26:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.549 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.808 [2024-11-26 20:26:05.122825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08 ']' 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.808 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.808 [2024-11-26 20:26:05.154448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.808 [2024-11-26 20:26:05.154486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.808 [2024-11-26 20:26:05.154562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.808 [2024-11-26 20:26:05.154636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.809 [2024-11-26 20:26:05.154646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:11.809 
20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.809 [2024-11-26 20:26:05.302284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:11.809 [2024-11-26 20:26:05.304168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:11.809 [2024-11-26 20:26:05.304232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:13:11.809 [2024-11-26 20:26:05.304317] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:11.809 [2024-11-26 20:26:05.304380] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:11.809 [2024-11-26 20:26:05.304402] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:11.809 [2024-11-26 20:26:05.304420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.809 [2024-11-26 20:26:05.304431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:11.809 request: 00:13:11.809 { 00:13:11.809 "name": "raid_bdev1", 00:13:11.809 "raid_level": "raid1", 00:13:11.809 "base_bdevs": [ 00:13:11.809 "malloc1", 00:13:11.809 "malloc2", 00:13:11.809 "malloc3" 00:13:11.809 ], 00:13:11.809 "superblock": false, 00:13:11.809 "method": "bdev_raid_create", 00:13:11.809 "req_id": 1 00:13:11.809 } 00:13:11.809 Got JSON-RPC error response 00:13:11.809 response: 00:13:11.809 { 00:13:11.809 "code": -17, 00:13:11.809 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:11.809 } 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.809 20:26:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.809 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.103 [2024-11-26 20:26:05.370109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.103 [2024-11-26 20:26:05.370235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.103 [2024-11-26 20:26:05.370323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:12.103 [2024-11-26 20:26:05.370380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.103 [2024-11-26 20:26:05.372900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.103 [2024-11-26 20:26:05.372982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.103 [2024-11-26 20:26:05.373136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:12.103 [2024-11-26 20:26:05.373259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.103 pt1 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.103 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.104 "name": "raid_bdev1", 00:13:12.104 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:12.104 "strip_size_kb": 0, 00:13:12.104 "state": "configuring", 00:13:12.104 
"raid_level": "raid1", 00:13:12.104 "superblock": true, 00:13:12.104 "num_base_bdevs": 3, 00:13:12.104 "num_base_bdevs_discovered": 1, 00:13:12.104 "num_base_bdevs_operational": 3, 00:13:12.104 "base_bdevs_list": [ 00:13:12.104 { 00:13:12.104 "name": "pt1", 00:13:12.104 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.104 "is_configured": true, 00:13:12.104 "data_offset": 2048, 00:13:12.104 "data_size": 63488 00:13:12.104 }, 00:13:12.104 { 00:13:12.104 "name": null, 00:13:12.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.104 "is_configured": false, 00:13:12.104 "data_offset": 2048, 00:13:12.104 "data_size": 63488 00:13:12.104 }, 00:13:12.104 { 00:13:12.104 "name": null, 00:13:12.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.104 "is_configured": false, 00:13:12.104 "data_offset": 2048, 00:13:12.104 "data_size": 63488 00:13:12.104 } 00:13:12.104 ] 00:13:12.104 }' 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.104 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.364 [2024-11-26 20:26:05.813375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.364 [2024-11-26 20:26:05.813518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.364 [2024-11-26 20:26:05.813550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:12.364 [2024-11-26 20:26:05.813561] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.364 [2024-11-26 20:26:05.814068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.364 [2024-11-26 20:26:05.814096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.364 [2024-11-26 20:26:05.814209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:12.364 [2024-11-26 20:26:05.814256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.364 pt2 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.364 [2024-11-26 20:26:05.825338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.364 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.365 "name": "raid_bdev1", 00:13:12.365 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:12.365 "strip_size_kb": 0, 00:13:12.365 "state": "configuring", 00:13:12.365 "raid_level": "raid1", 00:13:12.365 "superblock": true, 00:13:12.365 "num_base_bdevs": 3, 00:13:12.365 "num_base_bdevs_discovered": 1, 00:13:12.365 "num_base_bdevs_operational": 3, 00:13:12.365 "base_bdevs_list": [ 00:13:12.365 { 00:13:12.365 "name": "pt1", 00:13:12.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.365 "is_configured": true, 00:13:12.365 "data_offset": 2048, 00:13:12.365 "data_size": 63488 00:13:12.365 }, 00:13:12.365 { 00:13:12.365 "name": null, 00:13:12.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.365 "is_configured": false, 00:13:12.365 "data_offset": 0, 00:13:12.365 "data_size": 63488 00:13:12.365 }, 00:13:12.365 { 00:13:12.365 "name": null, 00:13:12.365 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.365 "is_configured": false, 00:13:12.365 "data_offset": 2048, 00:13:12.365 
"data_size": 63488 00:13:12.365 } 00:13:12.365 ] 00:13:12.365 }' 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.365 20:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.934 [2024-11-26 20:26:06.268603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.934 [2024-11-26 20:26:06.268731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.934 [2024-11-26 20:26:06.268775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:12.934 [2024-11-26 20:26:06.268822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.934 [2024-11-26 20:26:06.269394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.934 [2024-11-26 20:26:06.269473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.934 [2024-11-26 20:26:06.269621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:12.934 [2024-11-26 20:26:06.269706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.934 pt2 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.934 [2024-11-26 20:26:06.280564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:12.934 [2024-11-26 20:26:06.280658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.934 [2024-11-26 20:26:06.280716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:12.934 [2024-11-26 20:26:06.280769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.934 [2024-11-26 20:26:06.281281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.934 [2024-11-26 20:26:06.281358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:12.934 [2024-11-26 20:26:06.281493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:12.934 [2024-11-26 20:26:06.281562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.934 [2024-11-26 20:26:06.281778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:12.934 [2024-11-26 20:26:06.281803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:12.934 [2024-11-26 20:26:06.282065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:12.934 [2024-11-26 20:26:06.282263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:13:12.934 [2024-11-26 20:26:06.282274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:12.934 [2024-11-26 20:26:06.282459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.934 pt3 00:13:12.934 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.935 "name": "raid_bdev1", 00:13:12.935 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:12.935 "strip_size_kb": 0, 00:13:12.935 "state": "online", 00:13:12.935 "raid_level": "raid1", 00:13:12.935 "superblock": true, 00:13:12.935 "num_base_bdevs": 3, 00:13:12.935 "num_base_bdevs_discovered": 3, 00:13:12.935 "num_base_bdevs_operational": 3, 00:13:12.935 "base_bdevs_list": [ 00:13:12.935 { 00:13:12.935 "name": "pt1", 00:13:12.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.935 "is_configured": true, 00:13:12.935 "data_offset": 2048, 00:13:12.935 "data_size": 63488 00:13:12.935 }, 00:13:12.935 { 00:13:12.935 "name": "pt2", 00:13:12.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.935 "is_configured": true, 00:13:12.935 "data_offset": 2048, 00:13:12.935 "data_size": 63488 00:13:12.935 }, 00:13:12.935 { 00:13:12.935 "name": "pt3", 00:13:12.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.935 "is_configured": true, 00:13:12.935 "data_offset": 2048, 00:13:12.935 "data_size": 63488 00:13:12.935 } 00:13:12.935 ] 00:13:12.935 }' 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.935 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.194 20:26:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.194 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.194 [2024-11-26 20:26:06.740210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.454 "name": "raid_bdev1", 00:13:13.454 "aliases": [ 00:13:13.454 "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08" 00:13:13.454 ], 00:13:13.454 "product_name": "Raid Volume", 00:13:13.454 "block_size": 512, 00:13:13.454 "num_blocks": 63488, 00:13:13.454 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:13.454 "assigned_rate_limits": { 00:13:13.454 "rw_ios_per_sec": 0, 00:13:13.454 "rw_mbytes_per_sec": 0, 00:13:13.454 "r_mbytes_per_sec": 0, 00:13:13.454 "w_mbytes_per_sec": 0 00:13:13.454 }, 00:13:13.454 "claimed": false, 00:13:13.454 "zoned": false, 00:13:13.454 "supported_io_types": { 00:13:13.454 "read": true, 00:13:13.454 "write": true, 00:13:13.454 "unmap": false, 00:13:13.454 "flush": false, 00:13:13.454 "reset": true, 00:13:13.454 "nvme_admin": false, 00:13:13.454 "nvme_io": false, 00:13:13.454 "nvme_io_md": false, 00:13:13.454 "write_zeroes": true, 00:13:13.454 "zcopy": false, 00:13:13.454 "get_zone_info": false, 00:13:13.454 
"zone_management": false, 00:13:13.454 "zone_append": false, 00:13:13.454 "compare": false, 00:13:13.454 "compare_and_write": false, 00:13:13.454 "abort": false, 00:13:13.454 "seek_hole": false, 00:13:13.454 "seek_data": false, 00:13:13.454 "copy": false, 00:13:13.454 "nvme_iov_md": false 00:13:13.454 }, 00:13:13.454 "memory_domains": [ 00:13:13.454 { 00:13:13.454 "dma_device_id": "system", 00:13:13.454 "dma_device_type": 1 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.454 "dma_device_type": 2 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "dma_device_id": "system", 00:13:13.454 "dma_device_type": 1 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.454 "dma_device_type": 2 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "dma_device_id": "system", 00:13:13.454 "dma_device_type": 1 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.454 "dma_device_type": 2 00:13:13.454 } 00:13:13.454 ], 00:13:13.454 "driver_specific": { 00:13:13.454 "raid": { 00:13:13.454 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:13.454 "strip_size_kb": 0, 00:13:13.454 "state": "online", 00:13:13.454 "raid_level": "raid1", 00:13:13.454 "superblock": true, 00:13:13.454 "num_base_bdevs": 3, 00:13:13.454 "num_base_bdevs_discovered": 3, 00:13:13.454 "num_base_bdevs_operational": 3, 00:13:13.454 "base_bdevs_list": [ 00:13:13.454 { 00:13:13.454 "name": "pt1", 00:13:13.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.454 "is_configured": true, 00:13:13.454 "data_offset": 2048, 00:13:13.454 "data_size": 63488 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "name": "pt2", 00:13:13.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.454 "is_configured": true, 00:13:13.454 "data_offset": 2048, 00:13:13.454 "data_size": 63488 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "name": "pt3", 00:13:13.454 "uuid": "00000000-0000-0000-0000-000000000003", 
00:13:13.454 "is_configured": true, 00:13:13.454 "data_offset": 2048, 00:13:13.454 "data_size": 63488 00:13:13.454 } 00:13:13.454 ] 00:13:13.454 } 00:13:13.454 } 00:13:13.454 }' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:13.454 pt2 00:13:13.454 pt3' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.454 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:13.455 20:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.455 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.455 20:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.455 [2024-11-26 20:26:06.979720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08 '!=' 5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08 ']' 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.455 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.713 [2024-11-26 20:26:07.011451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.713 "name": "raid_bdev1", 00:13:13.713 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:13.713 "strip_size_kb": 0, 00:13:13.713 "state": "online", 00:13:13.713 "raid_level": "raid1", 00:13:13.713 "superblock": true, 00:13:13.713 "num_base_bdevs": 3, 00:13:13.713 "num_base_bdevs_discovered": 2, 00:13:13.713 "num_base_bdevs_operational": 2, 00:13:13.713 "base_bdevs_list": [ 00:13:13.713 { 00:13:13.713 "name": null, 00:13:13.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.713 "is_configured": false, 00:13:13.713 "data_offset": 0, 00:13:13.713 "data_size": 63488 00:13:13.713 }, 00:13:13.713 { 00:13:13.713 "name": "pt2", 00:13:13.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.713 "is_configured": true, 00:13:13.713 "data_offset": 2048, 00:13:13.713 "data_size": 63488 00:13:13.713 }, 00:13:13.713 { 00:13:13.713 "name": "pt3", 00:13:13.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.713 "is_configured": true, 00:13:13.713 "data_offset": 2048, 00:13:13.713 "data_size": 63488 00:13:13.713 } 00:13:13.713 ] 00:13:13.713 }' 00:13:13.713 20:26:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.713 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 [2024-11-26 20:26:07.426680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.972 [2024-11-26 20:26:07.426777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.972 [2024-11-26 20:26:07.426915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.972 [2024-11-26 20:26:07.427020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.972 [2024-11-26 20:26:07.427099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:13.972 
20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.972 20:26:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.972 [2024-11-26 20:26:07.510486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.972 [2024-11-26 20:26:07.510548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.972 [2024-11-26 20:26:07.510566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:13.973 [2024-11-26 20:26:07.510577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.973 [2024-11-26 20:26:07.512845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.973 [2024-11-26 20:26:07.512904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.973 [2024-11-26 20:26:07.512992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.973 [2024-11-26 20:26:07.513046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.973 pt2 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.973 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.232 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.232 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.232 "name": "raid_bdev1", 00:13:14.232 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:14.232 "strip_size_kb": 0, 00:13:14.232 "state": "configuring", 00:13:14.232 "raid_level": "raid1", 00:13:14.232 "superblock": true, 00:13:14.232 "num_base_bdevs": 3, 00:13:14.232 "num_base_bdevs_discovered": 1, 00:13:14.232 "num_base_bdevs_operational": 2, 00:13:14.232 "base_bdevs_list": [ 00:13:14.232 { 00:13:14.232 "name": null, 00:13:14.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.232 "is_configured": false, 00:13:14.232 "data_offset": 2048, 00:13:14.232 "data_size": 63488 00:13:14.232 }, 00:13:14.232 { 00:13:14.232 "name": "pt2", 00:13:14.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.232 "is_configured": true, 00:13:14.232 "data_offset": 2048, 00:13:14.232 "data_size": 63488 00:13:14.232 }, 00:13:14.232 { 00:13:14.232 "name": null, 00:13:14.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.232 "is_configured": false, 00:13:14.232 "data_offset": 2048, 00:13:14.232 "data_size": 63488 00:13:14.232 } 00:13:14.232 ] 00:13:14.232 }' 
00:13:14.232 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.232 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.492 [2024-11-26 20:26:07.989708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:14.492 [2024-11-26 20:26:07.989838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.492 [2024-11-26 20:26:07.989881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:14.492 [2024-11-26 20:26:07.989927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.492 [2024-11-26 20:26:07.990493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.492 [2024-11-26 20:26:07.990564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:14.492 [2024-11-26 20:26:07.990724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:14.492 [2024-11-26 20:26:07.990800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:14.492 [2024-11-26 20:26:07.990981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:14.492 [2024-11-26 20:26:07.991033] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:14.492 [2024-11-26 20:26:07.991389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:14.492 [2024-11-26 20:26:07.991613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:14.492 [2024-11-26 20:26:07.991664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:14.492 [2024-11-26 20:26:07.991904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.492 pt3 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.492 20:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.492 20:26:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.492 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.492 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.492 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.492 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.492 "name": "raid_bdev1", 00:13:14.492 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:14.492 "strip_size_kb": 0, 00:13:14.492 "state": "online", 00:13:14.492 "raid_level": "raid1", 00:13:14.492 "superblock": true, 00:13:14.492 "num_base_bdevs": 3, 00:13:14.492 "num_base_bdevs_discovered": 2, 00:13:14.492 "num_base_bdevs_operational": 2, 00:13:14.492 "base_bdevs_list": [ 00:13:14.492 { 00:13:14.492 "name": null, 00:13:14.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.492 "is_configured": false, 00:13:14.492 "data_offset": 2048, 00:13:14.492 "data_size": 63488 00:13:14.492 }, 00:13:14.492 { 00:13:14.492 "name": "pt2", 00:13:14.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.492 "is_configured": true, 00:13:14.492 "data_offset": 2048, 00:13:14.492 "data_size": 63488 00:13:14.492 }, 00:13:14.492 { 00:13:14.492 "name": "pt3", 00:13:14.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.492 "is_configured": true, 00:13:14.492 "data_offset": 2048, 00:13:14.492 "data_size": 63488 00:13:14.492 } 00:13:14.492 ] 00:13:14.492 }' 00:13:14.492 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.492 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.061 [2024-11-26 20:26:08.412994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.061 [2024-11-26 20:26:08.413033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.061 [2024-11-26 20:26:08.413124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.061 [2024-11-26 20:26:08.413195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.061 [2024-11-26 20:26:08.413206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.061 [2024-11-26 20:26:08.488924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:15.061 [2024-11-26 20:26:08.489114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.061 [2024-11-26 20:26:08.489147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:15.061 [2024-11-26 20:26:08.489161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.061 [2024-11-26 20:26:08.491558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.061 [2024-11-26 20:26:08.491606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:15.061 [2024-11-26 20:26:08.491723] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:15.061 [2024-11-26 20:26:08.491778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:15.061 [2024-11-26 20:26:08.491931] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:15.061 [2024-11-26 20:26:08.491949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.061 [2024-11-26 20:26:08.491968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state 
configuring 00:13:15.061 [2024-11-26 20:26:08.492025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:15.061 pt1 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.061 "name": "raid_bdev1", 00:13:15.061 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:15.061 "strip_size_kb": 0, 00:13:15.061 "state": "configuring", 00:13:15.061 "raid_level": "raid1", 00:13:15.061 "superblock": true, 00:13:15.061 "num_base_bdevs": 3, 00:13:15.061 "num_base_bdevs_discovered": 1, 00:13:15.061 "num_base_bdevs_operational": 2, 00:13:15.061 "base_bdevs_list": [ 00:13:15.061 { 00:13:15.061 "name": null, 00:13:15.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.061 "is_configured": false, 00:13:15.061 "data_offset": 2048, 00:13:15.061 "data_size": 63488 00:13:15.061 }, 00:13:15.061 { 00:13:15.061 "name": "pt2", 00:13:15.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.061 "is_configured": true, 00:13:15.061 "data_offset": 2048, 00:13:15.061 "data_size": 63488 00:13:15.061 }, 00:13:15.061 { 00:13:15.061 "name": null, 00:13:15.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.061 "is_configured": false, 00:13:15.061 "data_offset": 2048, 00:13:15.061 "data_size": 63488 00:13:15.061 } 00:13:15.061 ] 00:13:15.061 }' 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.061 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.630 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:15.630 20:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:15.630 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.630 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.630 20:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.630 20:26:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.630 [2024-11-26 20:26:09.016030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:15.630 [2024-11-26 20:26:09.016187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.630 [2024-11-26 20:26:09.016258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:15.630 [2024-11-26 20:26:09.016327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.630 [2024-11-26 20:26:09.017073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.630 [2024-11-26 20:26:09.017158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:15.630 [2024-11-26 20:26:09.017358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:15.630 [2024-11-26 20:26:09.017435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:15.630 [2024-11-26 20:26:09.017629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:15.630 [2024-11-26 20:26:09.017674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.630 [2024-11-26 20:26:09.017984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:15.630 [2024-11-26 20:26:09.018182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:15.630 [2024-11-26 20:26:09.018231] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:15.630 [2024-11-26 20:26:09.018457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.630 pt3 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:15.630 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.630 "name": "raid_bdev1", 00:13:15.630 "uuid": "5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08", 00:13:15.630 "strip_size_kb": 0, 00:13:15.630 "state": "online", 00:13:15.630 "raid_level": "raid1", 00:13:15.630 "superblock": true, 00:13:15.630 "num_base_bdevs": 3, 00:13:15.630 "num_base_bdevs_discovered": 2, 00:13:15.630 "num_base_bdevs_operational": 2, 00:13:15.630 "base_bdevs_list": [ 00:13:15.630 { 00:13:15.630 "name": null, 00:13:15.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.630 "is_configured": false, 00:13:15.630 "data_offset": 2048, 00:13:15.630 "data_size": 63488 00:13:15.630 }, 00:13:15.630 { 00:13:15.630 "name": "pt2", 00:13:15.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.630 "is_configured": true, 00:13:15.630 "data_offset": 2048, 00:13:15.630 "data_size": 63488 00:13:15.630 }, 00:13:15.630 { 00:13:15.630 "name": "pt3", 00:13:15.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.631 "is_configured": true, 00:13:15.631 "data_offset": 2048, 00:13:15.631 "data_size": 63488 00:13:15.631 } 00:13:15.631 ] 00:13:15.631 }' 00:13:15.631 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.631 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:16.207 [2024-11-26 20:26:09.511519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08 '!=' 5aceaf7f-371b-4bbd-b1cd-ecc75a6d4c08 ']' 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68951 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68951 ']' 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68951 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68951 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68951' 00:13:16.207 killing process with pid 68951 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68951 00:13:16.207 [2024-11-26 20:26:09.584199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.207 [2024-11-26 20:26:09.584325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.207 [2024-11-26 20:26:09.584393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.207 [2024-11-26 20:26:09.584405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:16.207 20:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68951 00:13:16.467 [2024-11-26 20:26:09.898096] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.846 20:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:17.846 00:13:17.846 real 0m7.812s 00:13:17.846 user 0m12.156s 00:13:17.846 sys 0m1.353s 00:13:17.846 20:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.846 20:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.846 ************************************ 00:13:17.846 END TEST raid_superblock_test 00:13:17.846 ************************************ 00:13:17.846 20:26:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:17.846 20:26:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:17.846 20:26:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.846 20:26:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.846 ************************************ 00:13:17.846 START TEST raid_read_error_test 00:13:17.846 ************************************ 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:17.846 20:26:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:17.846 20:26:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c53WByKvsr 00:13:17.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69397 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69397 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69397 ']' 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.846 20:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.847 20:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.847 20:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.847 [2024-11-26 20:26:11.267588] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:13:17.847 [2024-11-26 20:26:11.267716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69397 ] 00:13:18.106 [2024-11-26 20:26:11.446402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.106 [2024-11-26 20:26:11.582197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.366 [2024-11-26 20:26:11.811168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.366 [2024-11-26 20:26:11.811223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.625 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.625 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:18.625 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:18.625 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:18.625 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.625 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.625 BaseBdev1_malloc 00:13:18.625 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.626 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:18.626 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.626 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.626 true 00:13:18.626 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:18.626 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:18.626 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.626 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 [2024-11-26 20:26:12.180778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:18.885 [2024-11-26 20:26:12.180890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.885 [2024-11-26 20:26:12.180919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:18.885 [2024-11-26 20:26:12.180932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.885 [2024-11-26 20:26:12.183313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.885 [2024-11-26 20:26:12.183353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:18.885 BaseBdev1 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 BaseBdev2_malloc 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 true 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 [2024-11-26 20:26:12.248589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:18.885 [2024-11-26 20:26:12.248649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.885 [2024-11-26 20:26:12.248684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:18.885 [2024-11-26 20:26:12.248696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.885 [2024-11-26 20:26:12.251105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.885 [2024-11-26 20:26:12.251149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:18.885 BaseBdev2 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 BaseBdev3_malloc 00:13:18.885 20:26:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:18.885 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.886 true 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.886 [2024-11-26 20:26:12.331419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:18.886 [2024-11-26 20:26:12.331535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.886 [2024-11-26 20:26:12.331559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:18.886 [2024-11-26 20:26:12.331572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.886 [2024-11-26 20:26:12.334039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.886 [2024-11-26 20:26:12.334081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:18.886 BaseBdev3 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.886 [2024-11-26 20:26:12.343489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.886 [2024-11-26 20:26:12.345566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.886 [2024-11-26 20:26:12.345733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.886 [2024-11-26 20:26:12.346017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:18.886 [2024-11-26 20:26:12.346033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:18.886 [2024-11-26 20:26:12.346361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:18.886 [2024-11-26 20:26:12.346543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:18.886 [2024-11-26 20:26:12.346556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:18.886 [2024-11-26 20:26:12.346724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.886 20:26:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.886 "name": "raid_bdev1", 00:13:18.886 "uuid": "3dca6946-7de3-417e-802e-acbb28e4898c", 00:13:18.886 "strip_size_kb": 0, 00:13:18.886 "state": "online", 00:13:18.886 "raid_level": "raid1", 00:13:18.886 "superblock": true, 00:13:18.886 "num_base_bdevs": 3, 00:13:18.886 "num_base_bdevs_discovered": 3, 00:13:18.886 "num_base_bdevs_operational": 3, 00:13:18.886 "base_bdevs_list": [ 00:13:18.886 { 00:13:18.886 "name": "BaseBdev1", 00:13:18.886 "uuid": "cb766774-2a01-50ad-8a69-5a45def57431", 00:13:18.886 "is_configured": true, 00:13:18.886 "data_offset": 2048, 00:13:18.886 "data_size": 63488 00:13:18.886 }, 00:13:18.886 { 00:13:18.886 "name": "BaseBdev2", 00:13:18.886 "uuid": "a4ab8c35-3b24-50cf-abd6-b50902446303", 00:13:18.886 "is_configured": true, 00:13:18.886 "data_offset": 2048, 00:13:18.886 "data_size": 63488 
00:13:18.886 }, 00:13:18.886 { 00:13:18.886 "name": "BaseBdev3", 00:13:18.886 "uuid": "3c110bd5-469f-5093-9aed-b2b225dc5af7", 00:13:18.886 "is_configured": true, 00:13:18.886 "data_offset": 2048, 00:13:18.886 "data_size": 63488 00:13:18.886 } 00:13:18.886 ] 00:13:18.886 }' 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.886 20:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:19.456 20:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:19.456 [2024-11-26 20:26:12.899941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:20.394 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:20.394 20:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.394 20:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.394 20:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.394 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:20.394 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.395 
20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.395 "name": "raid_bdev1", 00:13:20.395 "uuid": "3dca6946-7de3-417e-802e-acbb28e4898c", 00:13:20.395 "strip_size_kb": 0, 00:13:20.395 "state": "online", 00:13:20.395 "raid_level": "raid1", 00:13:20.395 "superblock": true, 00:13:20.395 "num_base_bdevs": 3, 00:13:20.395 "num_base_bdevs_discovered": 3, 00:13:20.395 "num_base_bdevs_operational": 3, 00:13:20.395 "base_bdevs_list": [ 00:13:20.395 { 00:13:20.395 "name": "BaseBdev1", 00:13:20.395 "uuid": "cb766774-2a01-50ad-8a69-5a45def57431", 
00:13:20.395 "is_configured": true, 00:13:20.395 "data_offset": 2048, 00:13:20.395 "data_size": 63488 00:13:20.395 }, 00:13:20.395 { 00:13:20.395 "name": "BaseBdev2", 00:13:20.395 "uuid": "a4ab8c35-3b24-50cf-abd6-b50902446303", 00:13:20.395 "is_configured": true, 00:13:20.395 "data_offset": 2048, 00:13:20.395 "data_size": 63488 00:13:20.395 }, 00:13:20.395 { 00:13:20.395 "name": "BaseBdev3", 00:13:20.395 "uuid": "3c110bd5-469f-5093-9aed-b2b225dc5af7", 00:13:20.395 "is_configured": true, 00:13:20.395 "data_offset": 2048, 00:13:20.395 "data_size": 63488 00:13:20.395 } 00:13:20.395 ] 00:13:20.395 }' 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.395 20:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.965 [2024-11-26 20:26:14.257996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.965 [2024-11-26 20:26:14.258099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.965 [2024-11-26 20:26:14.261617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.965 [2024-11-26 20:26:14.261717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.965 [2024-11-26 20:26:14.261869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.965 [2024-11-26 20:26:14.261923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:20.965 { 00:13:20.965 "results": [ 00:13:20.965 { 00:13:20.965 "job": "raid_bdev1", 00:13:20.965 "core_mask": "0x1", 00:13:20.965 "workload": "randrw", 00:13:20.965 "percentage": 50, 00:13:20.965 "status": "finished", 00:13:20.965 "queue_depth": 1, 00:13:20.965 "io_size": 131072, 00:13:20.965 "runtime": 1.358925, 00:13:20.965 "iops": 12263.369943153595, 00:13:20.965 "mibps": 1532.9212428941994, 00:13:20.965 "io_failed": 0, 00:13:20.965 "io_timeout": 0, 00:13:20.965 "avg_latency_us": 78.60269639190993, 00:13:20.965 "min_latency_us": 25.2646288209607, 00:13:20.965 "max_latency_us": 1681.3275109170306 00:13:20.965 } 00:13:20.965 ], 00:13:20.965 "core_count": 1 00:13:20.965 } 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69397 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69397 ']' 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69397 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69397 00:13:20.965 killing process with pid 69397 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69397' 00:13:20.965 20:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69397 00:13:20.965 [2024-11-26 20:26:14.309579] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.965 20:26:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69397 00:13:21.224 [2024-11-26 20:26:14.555773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c53WByKvsr 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:22.604 ************************************ 00:13:22.604 END TEST raid_read_error_test 00:13:22.604 ************************************ 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:22.604 00:13:22.604 real 0m4.687s 00:13:22.604 user 0m5.545s 00:13:22.604 sys 0m0.582s 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.604 20:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.604 20:26:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:22.604 20:26:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:22.604 20:26:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.604 20:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.604 ************************************ 00:13:22.604 START TEST raid_write_error_test 00:13:22.604 ************************************ 00:13:22.604 20:26:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wpqlp0Eygd 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69537 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69537 00:13:22.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69537 ']' 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.604 20:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.604 [2024-11-26 20:26:16.023538] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:13:22.604 [2024-11-26 20:26:16.023754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:13:22.871 [2024-11-26 20:26:16.228021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.871 [2024-11-26 20:26:16.356332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.148 [2024-11-26 20:26:16.575940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.148 [2024-11-26 20:26:16.576084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.408 BaseBdev1_malloc 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.408 true 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.408 [2024-11-26 20:26:16.941375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:23.408 [2024-11-26 20:26:16.941434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.408 [2024-11-26 20:26:16.941454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:23.408 [2024-11-26 20:26:16.941465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.408 [2024-11-26 20:26:16.943631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.408 [2024-11-26 20:26:16.943744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:23.408 BaseBdev1 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.408 20:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:23.409 20:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:23.409 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.409 20:26:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.668 BaseBdev2_malloc 00:13:23.668 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.668 20:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:23.668 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.668 20:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.668 true 00:13:23.668 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.668 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:23.668 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.668 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.668 [2024-11-26 20:26:17.010029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:23.669 [2024-11-26 20:26:17.010185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.669 [2024-11-26 20:26:17.010213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:23.669 [2024-11-26 20:26:17.010227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.669 [2024-11-26 20:26:17.012799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.669 [2024-11-26 20:26:17.012844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:23.669 BaseBdev2 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:23.669 20:26:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.669 BaseBdev3_malloc 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.669 true 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.669 [2024-11-26 20:26:17.093901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:23.669 [2024-11-26 20:26:17.094016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.669 [2024-11-26 20:26:17.094069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:23.669 [2024-11-26 20:26:17.094105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.669 [2024-11-26 20:26:17.096520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.669 [2024-11-26 20:26:17.096613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:23.669 BaseBdev3 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.669 [2024-11-26 20:26:17.105956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.669 [2024-11-26 20:26:17.107949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.669 [2024-11-26 20:26:17.108065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.669 [2024-11-26 20:26:17.108335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:23.669 [2024-11-26 20:26:17.108390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:23.669 [2024-11-26 20:26:17.108703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:23.669 [2024-11-26 20:26:17.108944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:23.669 [2024-11-26 20:26:17.108993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:23.669 [2024-11-26 20:26:17.109221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.669 "name": "raid_bdev1", 00:13:23.669 "uuid": "9d5636b3-6fe9-4a6b-80f5-0670b459fe53", 00:13:23.669 "strip_size_kb": 0, 00:13:23.669 "state": "online", 00:13:23.669 "raid_level": "raid1", 00:13:23.669 "superblock": true, 00:13:23.669 "num_base_bdevs": 3, 00:13:23.669 "num_base_bdevs_discovered": 3, 00:13:23.669 "num_base_bdevs_operational": 3, 00:13:23.669 "base_bdevs_list": [ 00:13:23.669 { 00:13:23.669 "name": "BaseBdev1", 00:13:23.669 
"uuid": "75381a99-5abf-58ae-a50a-725dc568c180", 00:13:23.669 "is_configured": true, 00:13:23.669 "data_offset": 2048, 00:13:23.669 "data_size": 63488 00:13:23.669 }, 00:13:23.669 { 00:13:23.669 "name": "BaseBdev2", 00:13:23.669 "uuid": "49c45d8d-e2cf-518a-a8b0-905c4aa4e7bb", 00:13:23.669 "is_configured": true, 00:13:23.669 "data_offset": 2048, 00:13:23.669 "data_size": 63488 00:13:23.669 }, 00:13:23.669 { 00:13:23.669 "name": "BaseBdev3", 00:13:23.669 "uuid": "7099750e-740c-5b28-9ba6-28a80c77e530", 00:13:23.669 "is_configured": true, 00:13:23.669 "data_offset": 2048, 00:13:23.669 "data_size": 63488 00:13:23.669 } 00:13:23.669 ] 00:13:23.669 }' 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.669 20:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.237 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:24.237 20:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:24.237 [2024-11-26 20:26:17.642412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.177 [2024-11-26 20:26:18.555167] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:25.177 [2024-11-26 20:26:18.555233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.177 [2024-11-26 20:26:18.555478] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.177 "name": "raid_bdev1", 00:13:25.177 "uuid": "9d5636b3-6fe9-4a6b-80f5-0670b459fe53", 00:13:25.177 "strip_size_kb": 0, 00:13:25.177 "state": "online", 00:13:25.177 "raid_level": "raid1", 00:13:25.177 "superblock": true, 00:13:25.177 "num_base_bdevs": 3, 00:13:25.177 "num_base_bdevs_discovered": 2, 00:13:25.177 "num_base_bdevs_operational": 2, 00:13:25.177 "base_bdevs_list": [ 00:13:25.177 { 00:13:25.177 "name": null, 00:13:25.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.177 "is_configured": false, 00:13:25.177 "data_offset": 0, 00:13:25.177 "data_size": 63488 00:13:25.177 }, 00:13:25.177 { 00:13:25.177 "name": "BaseBdev2", 00:13:25.177 "uuid": "49c45d8d-e2cf-518a-a8b0-905c4aa4e7bb", 00:13:25.177 "is_configured": true, 00:13:25.177 "data_offset": 2048, 00:13:25.177 "data_size": 63488 00:13:25.177 }, 00:13:25.177 { 00:13:25.177 "name": "BaseBdev3", 00:13:25.177 "uuid": "7099750e-740c-5b28-9ba6-28a80c77e530", 00:13:25.177 "is_configured": true, 00:13:25.177 "data_offset": 2048, 00:13:25.177 "data_size": 63488 00:13:25.177 } 00:13:25.177 ] 00:13:25.177 }' 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.177 20:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.746 [2024-11-26 20:26:19.019425] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.746 [2024-11-26 20:26:19.019467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.746 [2024-11-26 20:26:19.022421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.746 [2024-11-26 20:26:19.022491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.746 [2024-11-26 20:26:19.022573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.746 [2024-11-26 20:26:19.022604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:25.746 { 00:13:25.746 "results": [ 00:13:25.746 { 00:13:25.746 "job": "raid_bdev1", 00:13:25.746 "core_mask": "0x1", 00:13:25.746 "workload": "randrw", 00:13:25.746 "percentage": 50, 00:13:25.746 "status": "finished", 00:13:25.746 "queue_depth": 1, 00:13:25.746 "io_size": 131072, 00:13:25.746 "runtime": 1.377525, 00:13:25.746 "iops": 13280.33973975064, 00:13:25.746 "mibps": 1660.04246746883, 00:13:25.746 "io_failed": 0, 00:13:25.746 "io_timeout": 0, 00:13:25.746 "avg_latency_us": 72.29372705776538, 00:13:25.746 "min_latency_us": 25.823580786026202, 00:13:25.746 "max_latency_us": 1595.4724890829693 00:13:25.746 } 00:13:25.746 ], 00:13:25.746 "core_count": 1 00:13:25.746 } 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69537 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69537 ']' 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69537 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:25.746 20:26:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69537 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.746 killing process with pid 69537 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69537' 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69537 00:13:25.746 [2024-11-26 20:26:19.057464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.746 20:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69537 00:13:26.005 [2024-11-26 20:26:19.316276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wpqlp0Eygd 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:27.383 ************************************ 00:13:27.383 END TEST raid_write_error_test 00:13:27.383 ************************************ 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:13:27.383 00:13:27.383 real 0m4.683s 00:13:27.383 user 0m5.535s 00:13:27.383 sys 0m0.578s 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.383 20:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.383 20:26:20 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:27.383 20:26:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:27.383 20:26:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:27.383 20:26:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:27.383 20:26:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.383 20:26:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.383 ************************************ 00:13:27.383 START TEST raid_state_function_test 00:13:27.383 ************************************ 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.383 
20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.383 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:27.384 20:26:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69687 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:27.384 Process raid pid: 69687 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69687' 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69687 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69687 ']' 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.384 20:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.384 [2024-11-26 20:26:20.760859] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:13:27.384 [2024-11-26 20:26:20.761001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.643 [2024-11-26 20:26:20.937731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.643 [2024-11-26 20:26:21.060944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.901 [2024-11-26 20:26:21.281612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.901 [2024-11-26 20:26:21.281663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.159 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.159 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:28.159 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:28.159 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.159 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.159 [2024-11-26 20:26:21.647903] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.159 [2024-11-26 20:26:21.647965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.159 [2024-11-26 20:26:21.647976] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.159 [2024-11-26 20:26:21.647986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.160 [2024-11-26 20:26:21.647992] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:28.160 [2024-11-26 20:26:21.648001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.160 [2024-11-26 20:26:21.648008] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:28.160 [2024-11-26 20:26:21.648017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.160 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.418 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.418 "name": "Existed_Raid", 00:13:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.418 "strip_size_kb": 64, 00:13:28.418 "state": "configuring", 00:13:28.418 "raid_level": "raid0", 00:13:28.418 "superblock": false, 00:13:28.418 "num_base_bdevs": 4, 00:13:28.418 "num_base_bdevs_discovered": 0, 00:13:28.418 "num_base_bdevs_operational": 4, 00:13:28.418 "base_bdevs_list": [ 00:13:28.418 { 00:13:28.418 "name": "BaseBdev1", 00:13:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.418 "is_configured": false, 00:13:28.418 "data_offset": 0, 00:13:28.418 "data_size": 0 00:13:28.418 }, 00:13:28.418 { 00:13:28.418 "name": "BaseBdev2", 00:13:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.418 "is_configured": false, 00:13:28.418 "data_offset": 0, 00:13:28.418 "data_size": 0 00:13:28.418 }, 00:13:28.418 { 00:13:28.418 "name": "BaseBdev3", 00:13:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.418 "is_configured": false, 00:13:28.418 "data_offset": 0, 00:13:28.418 "data_size": 0 00:13:28.418 }, 00:13:28.418 { 00:13:28.418 "name": "BaseBdev4", 00:13:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.418 "is_configured": false, 00:13:28.418 "data_offset": 0, 00:13:28.418 "data_size": 0 00:13:28.418 } 00:13:28.418 ] 00:13:28.418 }' 00:13:28.418 20:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.418 20:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.676 [2024-11-26 20:26:22.131012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.676 [2024-11-26 20:26:22.131064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.676 [2024-11-26 20:26:22.143000] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.676 [2024-11-26 20:26:22.143046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.676 [2024-11-26 20:26:22.143057] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.676 [2024-11-26 20:26:22.143068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.676 [2024-11-26 20:26:22.143075] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:28.676 [2024-11-26 20:26:22.143085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.676 [2024-11-26 20:26:22.143092] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:28.676 [2024-11-26 20:26:22.143102] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.676 [2024-11-26 20:26:22.195334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.676 BaseBdev1 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.676 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.676 [ 00:13:28.676 { 00:13:28.676 "name": "BaseBdev1", 00:13:28.676 "aliases": [ 00:13:28.676 "d50a3f7e-f8df-4b48-b956-0941a9024a54" 00:13:28.676 ], 00:13:28.676 "product_name": "Malloc disk", 00:13:28.676 "block_size": 512, 00:13:28.676 "num_blocks": 65536, 00:13:28.676 "uuid": "d50a3f7e-f8df-4b48-b956-0941a9024a54", 00:13:28.676 "assigned_rate_limits": { 00:13:28.676 "rw_ios_per_sec": 0, 00:13:28.676 "rw_mbytes_per_sec": 0, 00:13:28.676 "r_mbytes_per_sec": 0, 00:13:28.676 "w_mbytes_per_sec": 0 00:13:28.676 }, 00:13:28.676 "claimed": true, 00:13:28.676 "claim_type": "exclusive_write", 00:13:28.676 "zoned": false, 00:13:28.676 "supported_io_types": { 00:13:28.676 "read": true, 00:13:28.676 "write": true, 00:13:28.676 "unmap": true, 00:13:28.676 "flush": true, 00:13:28.676 "reset": true, 00:13:28.676 "nvme_admin": false, 00:13:28.676 "nvme_io": false, 00:13:28.676 "nvme_io_md": false, 00:13:28.676 "write_zeroes": true, 00:13:28.676 "zcopy": true, 00:13:28.676 "get_zone_info": false, 00:13:28.676 "zone_management": false, 00:13:28.676 "zone_append": false, 00:13:28.676 "compare": false, 00:13:28.676 "compare_and_write": false, 00:13:28.676 "abort": true, 00:13:28.676 "seek_hole": false, 00:13:28.676 "seek_data": false, 00:13:28.676 "copy": true, 00:13:28.676 "nvme_iov_md": false 00:13:28.676 }, 00:13:28.676 "memory_domains": [ 00:13:28.676 { 00:13:28.676 "dma_device_id": "system", 00:13:28.676 "dma_device_type": 1 00:13:28.676 }, 00:13:28.676 { 00:13:28.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.935 "dma_device_type": 2 00:13:28.935 } 00:13:28.935 ], 00:13:28.935 "driver_specific": {} 00:13:28.935 } 00:13:28.935 ] 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.935 "name": "Existed_Raid", 
00:13:28.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.935 "strip_size_kb": 64, 00:13:28.935 "state": "configuring", 00:13:28.935 "raid_level": "raid0", 00:13:28.935 "superblock": false, 00:13:28.935 "num_base_bdevs": 4, 00:13:28.935 "num_base_bdevs_discovered": 1, 00:13:28.935 "num_base_bdevs_operational": 4, 00:13:28.935 "base_bdevs_list": [ 00:13:28.935 { 00:13:28.935 "name": "BaseBdev1", 00:13:28.935 "uuid": "d50a3f7e-f8df-4b48-b956-0941a9024a54", 00:13:28.935 "is_configured": true, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 65536 00:13:28.935 }, 00:13:28.935 { 00:13:28.935 "name": "BaseBdev2", 00:13:28.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.935 "is_configured": false, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 0 00:13:28.935 }, 00:13:28.935 { 00:13:28.935 "name": "BaseBdev3", 00:13:28.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.935 "is_configured": false, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 0 00:13:28.935 }, 00:13:28.935 { 00:13:28.935 "name": "BaseBdev4", 00:13:28.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.935 "is_configured": false, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 0 00:13:28.935 } 00:13:28.935 ] 00:13:28.935 }' 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.935 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.193 [2024-11-26 20:26:22.710527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.193 [2024-11-26 20:26:22.710593] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.193 [2024-11-26 20:26:22.722558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.193 [2024-11-26 20:26:22.724663] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.193 [2024-11-26 20:26:22.724709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.193 [2024-11-26 20:26:22.724720] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:29.193 [2024-11-26 20:26:22.724733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.193 [2024-11-26 20:26:22.724740] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:29.193 [2024-11-26 20:26:22.724750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.193 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.451 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.451 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.451 "name": "Existed_Raid", 00:13:29.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.451 "strip_size_kb": 64, 00:13:29.451 "state": "configuring", 00:13:29.451 "raid_level": "raid0", 00:13:29.451 "superblock": false, 00:13:29.451 "num_base_bdevs": 4, 00:13:29.451 
"num_base_bdevs_discovered": 1, 00:13:29.451 "num_base_bdevs_operational": 4, 00:13:29.451 "base_bdevs_list": [ 00:13:29.451 { 00:13:29.451 "name": "BaseBdev1", 00:13:29.451 "uuid": "d50a3f7e-f8df-4b48-b956-0941a9024a54", 00:13:29.451 "is_configured": true, 00:13:29.451 "data_offset": 0, 00:13:29.451 "data_size": 65536 00:13:29.451 }, 00:13:29.451 { 00:13:29.451 "name": "BaseBdev2", 00:13:29.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.451 "is_configured": false, 00:13:29.451 "data_offset": 0, 00:13:29.451 "data_size": 0 00:13:29.451 }, 00:13:29.451 { 00:13:29.451 "name": "BaseBdev3", 00:13:29.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.451 "is_configured": false, 00:13:29.451 "data_offset": 0, 00:13:29.451 "data_size": 0 00:13:29.451 }, 00:13:29.451 { 00:13:29.451 "name": "BaseBdev4", 00:13:29.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.451 "is_configured": false, 00:13:29.451 "data_offset": 0, 00:13:29.452 "data_size": 0 00:13:29.452 } 00:13:29.452 ] 00:13:29.452 }' 00:13:29.452 20:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.452 20:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.710 [2024-11-26 20:26:23.215715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.710 BaseBdev2 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:29.710 20:26:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.710 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.710 [ 00:13:29.710 { 00:13:29.710 "name": "BaseBdev2", 00:13:29.710 "aliases": [ 00:13:29.710 "93112113-49a6-4693-a9ef-32542187dfe8" 00:13:29.710 ], 00:13:29.710 "product_name": "Malloc disk", 00:13:29.710 "block_size": 512, 00:13:29.710 "num_blocks": 65536, 00:13:29.710 "uuid": "93112113-49a6-4693-a9ef-32542187dfe8", 00:13:29.710 "assigned_rate_limits": { 00:13:29.710 "rw_ios_per_sec": 0, 00:13:29.710 "rw_mbytes_per_sec": 0, 00:13:29.710 "r_mbytes_per_sec": 0, 00:13:29.710 "w_mbytes_per_sec": 0 00:13:29.710 }, 00:13:29.710 "claimed": true, 00:13:29.710 "claim_type": "exclusive_write", 00:13:29.710 "zoned": false, 00:13:29.710 "supported_io_types": { 
00:13:29.710 "read": true, 00:13:29.710 "write": true, 00:13:29.710 "unmap": true, 00:13:29.710 "flush": true, 00:13:29.710 "reset": true, 00:13:29.710 "nvme_admin": false, 00:13:29.710 "nvme_io": false, 00:13:29.710 "nvme_io_md": false, 00:13:29.710 "write_zeroes": true, 00:13:29.710 "zcopy": true, 00:13:29.710 "get_zone_info": false, 00:13:29.710 "zone_management": false, 00:13:29.710 "zone_append": false, 00:13:29.710 "compare": false, 00:13:29.710 "compare_and_write": false, 00:13:29.710 "abort": true, 00:13:29.710 "seek_hole": false, 00:13:29.710 "seek_data": false, 00:13:29.710 "copy": true, 00:13:29.710 "nvme_iov_md": false 00:13:29.710 }, 00:13:29.710 "memory_domains": [ 00:13:29.710 { 00:13:29.710 "dma_device_id": "system", 00:13:29.710 "dma_device_type": 1 00:13:29.710 }, 00:13:29.710 { 00:13:29.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.711 "dma_device_type": 2 00:13:29.711 } 00:13:29.711 ], 00:13:29.711 "driver_specific": {} 00:13:29.711 } 00:13:29.711 ] 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.711 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.969 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.969 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.969 "name": "Existed_Raid", 00:13:29.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.969 "strip_size_kb": 64, 00:13:29.969 "state": "configuring", 00:13:29.969 "raid_level": "raid0", 00:13:29.969 "superblock": false, 00:13:29.969 "num_base_bdevs": 4, 00:13:29.969 "num_base_bdevs_discovered": 2, 00:13:29.969 "num_base_bdevs_operational": 4, 00:13:29.969 "base_bdevs_list": [ 00:13:29.969 { 00:13:29.969 "name": "BaseBdev1", 00:13:29.969 "uuid": "d50a3f7e-f8df-4b48-b956-0941a9024a54", 00:13:29.969 "is_configured": true, 00:13:29.969 "data_offset": 0, 00:13:29.969 "data_size": 65536 00:13:29.969 }, 00:13:29.969 { 00:13:29.969 "name": "BaseBdev2", 00:13:29.969 "uuid": "93112113-49a6-4693-a9ef-32542187dfe8", 00:13:29.969 
"is_configured": true, 00:13:29.969 "data_offset": 0, 00:13:29.969 "data_size": 65536 00:13:29.969 }, 00:13:29.969 { 00:13:29.969 "name": "BaseBdev3", 00:13:29.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.969 "is_configured": false, 00:13:29.969 "data_offset": 0, 00:13:29.969 "data_size": 0 00:13:29.969 }, 00:13:29.969 { 00:13:29.969 "name": "BaseBdev4", 00:13:29.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.969 "is_configured": false, 00:13:29.969 "data_offset": 0, 00:13:29.969 "data_size": 0 00:13:29.969 } 00:13:29.969 ] 00:13:29.969 }' 00:13:29.969 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.969 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.228 [2024-11-26 20:26:23.758031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.228 BaseBdev3 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.228 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.486 [ 00:13:30.486 { 00:13:30.486 "name": "BaseBdev3", 00:13:30.486 "aliases": [ 00:13:30.486 "5651f716-edf2-40a2-8c9e-82fe5180c149" 00:13:30.486 ], 00:13:30.486 "product_name": "Malloc disk", 00:13:30.486 "block_size": 512, 00:13:30.486 "num_blocks": 65536, 00:13:30.486 "uuid": "5651f716-edf2-40a2-8c9e-82fe5180c149", 00:13:30.486 "assigned_rate_limits": { 00:13:30.486 "rw_ios_per_sec": 0, 00:13:30.486 "rw_mbytes_per_sec": 0, 00:13:30.486 "r_mbytes_per_sec": 0, 00:13:30.486 "w_mbytes_per_sec": 0 00:13:30.486 }, 00:13:30.486 "claimed": true, 00:13:30.486 "claim_type": "exclusive_write", 00:13:30.486 "zoned": false, 00:13:30.486 "supported_io_types": { 00:13:30.486 "read": true, 00:13:30.486 "write": true, 00:13:30.486 "unmap": true, 00:13:30.486 "flush": true, 00:13:30.486 "reset": true, 00:13:30.486 "nvme_admin": false, 00:13:30.486 "nvme_io": false, 00:13:30.486 "nvme_io_md": false, 00:13:30.486 "write_zeroes": true, 00:13:30.486 "zcopy": true, 00:13:30.486 "get_zone_info": false, 00:13:30.486 "zone_management": false, 00:13:30.486 "zone_append": false, 00:13:30.486 "compare": false, 00:13:30.486 "compare_and_write": false, 
00:13:30.486 "abort": true, 00:13:30.486 "seek_hole": false, 00:13:30.486 "seek_data": false, 00:13:30.486 "copy": true, 00:13:30.486 "nvme_iov_md": false 00:13:30.486 }, 00:13:30.486 "memory_domains": [ 00:13:30.486 { 00:13:30.486 "dma_device_id": "system", 00:13:30.486 "dma_device_type": 1 00:13:30.486 }, 00:13:30.486 { 00:13:30.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.486 "dma_device_type": 2 00:13:30.486 } 00:13:30.486 ], 00:13:30.486 "driver_specific": {} 00:13:30.486 } 00:13:30.486 ] 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.486 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.486 "name": "Existed_Raid", 00:13:30.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.486 "strip_size_kb": 64, 00:13:30.486 "state": "configuring", 00:13:30.486 "raid_level": "raid0", 00:13:30.487 "superblock": false, 00:13:30.487 "num_base_bdevs": 4, 00:13:30.487 "num_base_bdevs_discovered": 3, 00:13:30.487 "num_base_bdevs_operational": 4, 00:13:30.487 "base_bdevs_list": [ 00:13:30.487 { 00:13:30.487 "name": "BaseBdev1", 00:13:30.487 "uuid": "d50a3f7e-f8df-4b48-b956-0941a9024a54", 00:13:30.487 "is_configured": true, 00:13:30.487 "data_offset": 0, 00:13:30.487 "data_size": 65536 00:13:30.487 }, 00:13:30.487 { 00:13:30.487 "name": "BaseBdev2", 00:13:30.487 "uuid": "93112113-49a6-4693-a9ef-32542187dfe8", 00:13:30.487 "is_configured": true, 00:13:30.487 "data_offset": 0, 00:13:30.487 "data_size": 65536 00:13:30.487 }, 00:13:30.487 { 00:13:30.487 "name": "BaseBdev3", 00:13:30.487 "uuid": "5651f716-edf2-40a2-8c9e-82fe5180c149", 00:13:30.487 "is_configured": true, 00:13:30.487 "data_offset": 0, 00:13:30.487 "data_size": 65536 00:13:30.487 }, 00:13:30.487 { 00:13:30.487 "name": "BaseBdev4", 00:13:30.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.487 "is_configured": false, 
00:13:30.487 "data_offset": 0, 00:13:30.487 "data_size": 0 00:13:30.487 } 00:13:30.487 ] 00:13:30.487 }' 00:13:30.487 20:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.487 20:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.745 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:30.746 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.746 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.746 [2024-11-26 20:26:24.296078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.746 [2024-11-26 20:26:24.296137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:30.746 [2024-11-26 20:26:24.296148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:30.746 [2024-11-26 20:26:24.296470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:30.746 [2024-11-26 20:26:24.296694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:30.746 [2024-11-26 20:26:24.296717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:30.746 [2024-11-26 20:26:24.297024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.746 BaseBdev4 00:13:30.746 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.746 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:30.746 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:30.746 20:26:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.005 [ 00:13:31.005 { 00:13:31.005 "name": "BaseBdev4", 00:13:31.005 "aliases": [ 00:13:31.005 "876845c3-7f3c-4534-9efe-642f76b18f47" 00:13:31.005 ], 00:13:31.005 "product_name": "Malloc disk", 00:13:31.005 "block_size": 512, 00:13:31.005 "num_blocks": 65536, 00:13:31.005 "uuid": "876845c3-7f3c-4534-9efe-642f76b18f47", 00:13:31.005 "assigned_rate_limits": { 00:13:31.005 "rw_ios_per_sec": 0, 00:13:31.005 "rw_mbytes_per_sec": 0, 00:13:31.005 "r_mbytes_per_sec": 0, 00:13:31.005 "w_mbytes_per_sec": 0 00:13:31.005 }, 00:13:31.005 "claimed": true, 00:13:31.005 "claim_type": "exclusive_write", 00:13:31.005 "zoned": false, 00:13:31.005 "supported_io_types": { 00:13:31.005 "read": true, 00:13:31.005 "write": true, 00:13:31.005 "unmap": true, 00:13:31.005 "flush": true, 00:13:31.005 "reset": true, 00:13:31.005 
"nvme_admin": false, 00:13:31.005 "nvme_io": false, 00:13:31.005 "nvme_io_md": false, 00:13:31.005 "write_zeroes": true, 00:13:31.005 "zcopy": true, 00:13:31.005 "get_zone_info": false, 00:13:31.005 "zone_management": false, 00:13:31.005 "zone_append": false, 00:13:31.005 "compare": false, 00:13:31.005 "compare_and_write": false, 00:13:31.005 "abort": true, 00:13:31.005 "seek_hole": false, 00:13:31.005 "seek_data": false, 00:13:31.005 "copy": true, 00:13:31.005 "nvme_iov_md": false 00:13:31.005 }, 00:13:31.005 "memory_domains": [ 00:13:31.005 { 00:13:31.005 "dma_device_id": "system", 00:13:31.005 "dma_device_type": 1 00:13:31.005 }, 00:13:31.005 { 00:13:31.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.005 "dma_device_type": 2 00:13:31.005 } 00:13:31.005 ], 00:13:31.005 "driver_specific": {} 00:13:31.005 } 00:13:31.005 ] 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.005 20:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.005 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.005 "name": "Existed_Raid", 00:13:31.005 "uuid": "508fc405-546e-49ed-a695-16bd7ba2cc70", 00:13:31.005 "strip_size_kb": 64, 00:13:31.005 "state": "online", 00:13:31.005 "raid_level": "raid0", 00:13:31.005 "superblock": false, 00:13:31.005 "num_base_bdevs": 4, 00:13:31.005 "num_base_bdevs_discovered": 4, 00:13:31.006 "num_base_bdevs_operational": 4, 00:13:31.006 "base_bdevs_list": [ 00:13:31.006 { 00:13:31.006 "name": "BaseBdev1", 00:13:31.006 "uuid": "d50a3f7e-f8df-4b48-b956-0941a9024a54", 00:13:31.006 "is_configured": true, 00:13:31.006 "data_offset": 0, 00:13:31.006 "data_size": 65536 00:13:31.006 }, 00:13:31.006 { 00:13:31.006 "name": "BaseBdev2", 00:13:31.006 "uuid": "93112113-49a6-4693-a9ef-32542187dfe8", 00:13:31.006 "is_configured": true, 00:13:31.006 "data_offset": 0, 00:13:31.006 "data_size": 65536 00:13:31.006 }, 00:13:31.006 { 00:13:31.006 "name": "BaseBdev3", 00:13:31.006 "uuid": 
"5651f716-edf2-40a2-8c9e-82fe5180c149", 00:13:31.006 "is_configured": true, 00:13:31.006 "data_offset": 0, 00:13:31.006 "data_size": 65536 00:13:31.006 }, 00:13:31.006 { 00:13:31.006 "name": "BaseBdev4", 00:13:31.006 "uuid": "876845c3-7f3c-4534-9efe-642f76b18f47", 00:13:31.006 "is_configured": true, 00:13:31.006 "data_offset": 0, 00:13:31.006 "data_size": 65536 00:13:31.006 } 00:13:31.006 ] 00:13:31.006 }' 00:13:31.006 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.006 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.265 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.265 [2024-11-26 20:26:24.807735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.526 20:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.526 "name": "Existed_Raid", 00:13:31.526 "aliases": [ 00:13:31.526 "508fc405-546e-49ed-a695-16bd7ba2cc70" 00:13:31.526 ], 00:13:31.526 "product_name": "Raid Volume", 00:13:31.526 "block_size": 512, 00:13:31.526 "num_blocks": 262144, 00:13:31.526 "uuid": "508fc405-546e-49ed-a695-16bd7ba2cc70", 00:13:31.526 "assigned_rate_limits": { 00:13:31.526 "rw_ios_per_sec": 0, 00:13:31.526 "rw_mbytes_per_sec": 0, 00:13:31.526 "r_mbytes_per_sec": 0, 00:13:31.526 "w_mbytes_per_sec": 0 00:13:31.526 }, 00:13:31.526 "claimed": false, 00:13:31.526 "zoned": false, 00:13:31.526 "supported_io_types": { 00:13:31.526 "read": true, 00:13:31.526 "write": true, 00:13:31.526 "unmap": true, 00:13:31.526 "flush": true, 00:13:31.526 "reset": true, 00:13:31.526 "nvme_admin": false, 00:13:31.526 "nvme_io": false, 00:13:31.526 "nvme_io_md": false, 00:13:31.526 "write_zeroes": true, 00:13:31.526 "zcopy": false, 00:13:31.526 "get_zone_info": false, 00:13:31.526 "zone_management": false, 00:13:31.526 "zone_append": false, 00:13:31.526 "compare": false, 00:13:31.526 "compare_and_write": false, 00:13:31.526 "abort": false, 00:13:31.526 "seek_hole": false, 00:13:31.526 "seek_data": false, 00:13:31.526 "copy": false, 00:13:31.526 "nvme_iov_md": false 00:13:31.526 }, 00:13:31.526 "memory_domains": [ 00:13:31.526 { 00:13:31.526 "dma_device_id": "system", 00:13:31.526 "dma_device_type": 1 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.526 "dma_device_type": 2 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "dma_device_id": "system", 00:13:31.526 "dma_device_type": 1 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.526 "dma_device_type": 2 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "dma_device_id": "system", 00:13:31.526 "dma_device_type": 1 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:31.526 "dma_device_type": 2 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "dma_device_id": "system", 00:13:31.526 "dma_device_type": 1 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.526 "dma_device_type": 2 00:13:31.526 } 00:13:31.526 ], 00:13:31.526 "driver_specific": { 00:13:31.526 "raid": { 00:13:31.526 "uuid": "508fc405-546e-49ed-a695-16bd7ba2cc70", 00:13:31.526 "strip_size_kb": 64, 00:13:31.526 "state": "online", 00:13:31.526 "raid_level": "raid0", 00:13:31.526 "superblock": false, 00:13:31.526 "num_base_bdevs": 4, 00:13:31.526 "num_base_bdevs_discovered": 4, 00:13:31.526 "num_base_bdevs_operational": 4, 00:13:31.526 "base_bdevs_list": [ 00:13:31.526 { 00:13:31.526 "name": "BaseBdev1", 00:13:31.526 "uuid": "d50a3f7e-f8df-4b48-b956-0941a9024a54", 00:13:31.526 "is_configured": true, 00:13:31.526 "data_offset": 0, 00:13:31.526 "data_size": 65536 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "name": "BaseBdev2", 00:13:31.526 "uuid": "93112113-49a6-4693-a9ef-32542187dfe8", 00:13:31.526 "is_configured": true, 00:13:31.526 "data_offset": 0, 00:13:31.526 "data_size": 65536 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "name": "BaseBdev3", 00:13:31.526 "uuid": "5651f716-edf2-40a2-8c9e-82fe5180c149", 00:13:31.526 "is_configured": true, 00:13:31.526 "data_offset": 0, 00:13:31.526 "data_size": 65536 00:13:31.526 }, 00:13:31.526 { 00:13:31.526 "name": "BaseBdev4", 00:13:31.526 "uuid": "876845c3-7f3c-4534-9efe-642f76b18f47", 00:13:31.526 "is_configured": true, 00:13:31.526 "data_offset": 0, 00:13:31.526 "data_size": 65536 00:13:31.526 } 00:13:31.526 ] 00:13:31.526 } 00:13:31.526 } 00:13:31.526 }' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:31.526 BaseBdev2 00:13:31.526 BaseBdev3 
00:13:31.526 BaseBdev4' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.526 20:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.526 20:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.526 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.786 20:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.786 [2024-11-26 20:26:25.134847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.786 [2024-11-26 20:26:25.134887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.786 [2024-11-26 20:26:25.134949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.786 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.787 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.787 "name": "Existed_Raid", 00:13:31.787 "uuid": "508fc405-546e-49ed-a695-16bd7ba2cc70", 00:13:31.787 "strip_size_kb": 64, 00:13:31.787 "state": "offline", 00:13:31.787 "raid_level": "raid0", 00:13:31.787 "superblock": false, 00:13:31.787 "num_base_bdevs": 4, 00:13:31.787 "num_base_bdevs_discovered": 3, 00:13:31.787 "num_base_bdevs_operational": 3, 00:13:31.787 "base_bdevs_list": [ 00:13:31.787 { 00:13:31.787 "name": null, 00:13:31.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.787 "is_configured": false, 00:13:31.787 "data_offset": 0, 00:13:31.787 "data_size": 65536 00:13:31.787 }, 00:13:31.787 { 00:13:31.787 "name": "BaseBdev2", 00:13:31.787 "uuid": "93112113-49a6-4693-a9ef-32542187dfe8", 00:13:31.787 "is_configured": 
true, 00:13:31.787 "data_offset": 0, 00:13:31.787 "data_size": 65536 00:13:31.787 }, 00:13:31.787 { 00:13:31.787 "name": "BaseBdev3", 00:13:31.787 "uuid": "5651f716-edf2-40a2-8c9e-82fe5180c149", 00:13:31.787 "is_configured": true, 00:13:31.787 "data_offset": 0, 00:13:31.787 "data_size": 65536 00:13:31.787 }, 00:13:31.787 { 00:13:31.787 "name": "BaseBdev4", 00:13:31.787 "uuid": "876845c3-7f3c-4534-9efe-642f76b18f47", 00:13:31.787 "is_configured": true, 00:13:31.787 "data_offset": 0, 00:13:31.787 "data_size": 65536 00:13:31.787 } 00:13:31.787 ] 00:13:31.787 }' 00:13:31.787 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.787 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.353 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.354 [2024-11-26 20:26:25.764276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.354 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.612 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.612 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.612 20:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:32.612 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.612 20:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.612 [2024-11-26 20:26:25.927531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.612 20:26:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.612 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.613 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:32.613 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.613 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.613 [2024-11-26 20:26:26.101211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:32.613 [2024-11-26 20:26:26.101287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:32.871 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.871 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.871 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.871 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.871 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:32.871 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.871 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 BaseBdev2 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 [ 00:13:32.872 { 00:13:32.872 "name": "BaseBdev2", 00:13:32.872 "aliases": [ 00:13:32.872 "719d86e1-e8f0-449d-bf3c-b95a64f3080c" 00:13:32.872 ], 00:13:32.872 "product_name": "Malloc disk", 00:13:32.872 "block_size": 512, 00:13:32.872 "num_blocks": 65536, 00:13:32.872 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:32.872 "assigned_rate_limits": { 00:13:32.872 "rw_ios_per_sec": 0, 00:13:32.872 "rw_mbytes_per_sec": 0, 00:13:32.872 "r_mbytes_per_sec": 0, 00:13:32.872 "w_mbytes_per_sec": 0 00:13:32.872 }, 00:13:32.872 "claimed": false, 00:13:32.872 "zoned": false, 00:13:32.872 "supported_io_types": { 00:13:32.872 "read": true, 00:13:32.872 "write": true, 00:13:32.872 "unmap": true, 00:13:32.872 "flush": true, 00:13:32.872 "reset": true, 00:13:32.872 "nvme_admin": false, 00:13:32.872 "nvme_io": false, 00:13:32.872 "nvme_io_md": false, 00:13:32.872 "write_zeroes": true, 00:13:32.872 "zcopy": true, 00:13:32.872 "get_zone_info": false, 00:13:32.872 "zone_management": false, 00:13:32.872 "zone_append": false, 00:13:32.872 "compare": false, 00:13:32.872 "compare_and_write": false, 00:13:32.872 "abort": true, 00:13:32.872 "seek_hole": false, 00:13:32.872 
"seek_data": false, 00:13:32.872 "copy": true, 00:13:32.872 "nvme_iov_md": false 00:13:32.872 }, 00:13:32.872 "memory_domains": [ 00:13:32.872 { 00:13:32.872 "dma_device_id": "system", 00:13:32.872 "dma_device_type": 1 00:13:32.872 }, 00:13:32.872 { 00:13:32.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.872 "dma_device_type": 2 00:13:32.872 } 00:13:32.872 ], 00:13:32.872 "driver_specific": {} 00:13:32.872 } 00:13:32.872 ] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 BaseBdev3 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 [ 00:13:33.132 { 00:13:33.132 "name": "BaseBdev3", 00:13:33.132 "aliases": [ 00:13:33.132 "f5806705-a58a-4fce-8c53-a704bad28b1d" 00:13:33.132 ], 00:13:33.132 "product_name": "Malloc disk", 00:13:33.132 "block_size": 512, 00:13:33.132 "num_blocks": 65536, 00:13:33.132 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:33.132 "assigned_rate_limits": { 00:13:33.132 "rw_ios_per_sec": 0, 00:13:33.132 "rw_mbytes_per_sec": 0, 00:13:33.132 "r_mbytes_per_sec": 0, 00:13:33.132 "w_mbytes_per_sec": 0 00:13:33.132 }, 00:13:33.132 "claimed": false, 00:13:33.132 "zoned": false, 00:13:33.132 "supported_io_types": { 00:13:33.132 "read": true, 00:13:33.132 "write": true, 00:13:33.132 "unmap": true, 00:13:33.132 "flush": true, 00:13:33.132 "reset": true, 00:13:33.132 "nvme_admin": false, 00:13:33.132 "nvme_io": false, 00:13:33.132 "nvme_io_md": false, 00:13:33.132 "write_zeroes": true, 00:13:33.132 "zcopy": true, 00:13:33.132 "get_zone_info": false, 00:13:33.132 "zone_management": false, 00:13:33.132 "zone_append": false, 00:13:33.132 "compare": false, 00:13:33.132 "compare_and_write": false, 00:13:33.132 "abort": true, 00:13:33.132 "seek_hole": false, 00:13:33.132 "seek_data": false, 
00:13:33.132 "copy": true, 00:13:33.132 "nvme_iov_md": false 00:13:33.132 }, 00:13:33.132 "memory_domains": [ 00:13:33.132 { 00:13:33.132 "dma_device_id": "system", 00:13:33.132 "dma_device_type": 1 00:13:33.132 }, 00:13:33.132 { 00:13:33.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.132 "dma_device_type": 2 00:13:33.132 } 00:13:33.132 ], 00:13:33.132 "driver_specific": {} 00:13:33.132 } 00:13:33.132 ] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 BaseBdev4 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:33.132 
20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 [ 00:13:33.132 { 00:13:33.132 "name": "BaseBdev4", 00:13:33.132 "aliases": [ 00:13:33.132 "dd0f5072-a935-4b6e-8a76-01880b46edbf" 00:13:33.132 ], 00:13:33.132 "product_name": "Malloc disk", 00:13:33.132 "block_size": 512, 00:13:33.132 "num_blocks": 65536, 00:13:33.132 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:33.132 "assigned_rate_limits": { 00:13:33.132 "rw_ios_per_sec": 0, 00:13:33.132 "rw_mbytes_per_sec": 0, 00:13:33.132 "r_mbytes_per_sec": 0, 00:13:33.132 "w_mbytes_per_sec": 0 00:13:33.132 }, 00:13:33.132 "claimed": false, 00:13:33.132 "zoned": false, 00:13:33.132 "supported_io_types": { 00:13:33.132 "read": true, 00:13:33.132 "write": true, 00:13:33.132 "unmap": true, 00:13:33.132 "flush": true, 00:13:33.132 "reset": true, 00:13:33.132 "nvme_admin": false, 00:13:33.132 "nvme_io": false, 00:13:33.132 "nvme_io_md": false, 00:13:33.132 "write_zeroes": true, 00:13:33.132 "zcopy": true, 00:13:33.132 "get_zone_info": false, 00:13:33.132 "zone_management": false, 00:13:33.132 "zone_append": false, 00:13:33.132 "compare": false, 00:13:33.132 "compare_and_write": false, 00:13:33.132 "abort": true, 00:13:33.132 "seek_hole": false, 00:13:33.132 "seek_data": false, 00:13:33.132 
"copy": true, 00:13:33.132 "nvme_iov_md": false 00:13:33.132 }, 00:13:33.132 "memory_domains": [ 00:13:33.132 { 00:13:33.132 "dma_device_id": "system", 00:13:33.132 "dma_device_type": 1 00:13:33.132 }, 00:13:33.132 { 00:13:33.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.132 "dma_device_type": 2 00:13:33.132 } 00:13:33.132 ], 00:13:33.132 "driver_specific": {} 00:13:33.132 } 00:13:33.132 ] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 [2024-11-26 20:26:26.522642] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.132 [2024-11-26 20:26:26.522747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.132 [2024-11-26 20:26:26.522803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.132 [2024-11-26 20:26:26.524874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.132 [2024-11-26 20:26:26.524980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 20:26:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.132 "name": "Existed_Raid", 00:13:33.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.132 "strip_size_kb": 64, 00:13:33.132 "state": "configuring", 00:13:33.132 
"raid_level": "raid0", 00:13:33.132 "superblock": false, 00:13:33.132 "num_base_bdevs": 4, 00:13:33.132 "num_base_bdevs_discovered": 3, 00:13:33.132 "num_base_bdevs_operational": 4, 00:13:33.132 "base_bdevs_list": [ 00:13:33.132 { 00:13:33.132 "name": "BaseBdev1", 00:13:33.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.132 "is_configured": false, 00:13:33.132 "data_offset": 0, 00:13:33.133 "data_size": 0 00:13:33.133 }, 00:13:33.133 { 00:13:33.133 "name": "BaseBdev2", 00:13:33.133 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:33.133 "is_configured": true, 00:13:33.133 "data_offset": 0, 00:13:33.133 "data_size": 65536 00:13:33.133 }, 00:13:33.133 { 00:13:33.133 "name": "BaseBdev3", 00:13:33.133 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:33.133 "is_configured": true, 00:13:33.133 "data_offset": 0, 00:13:33.133 "data_size": 65536 00:13:33.133 }, 00:13:33.133 { 00:13:33.133 "name": "BaseBdev4", 00:13:33.133 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:33.133 "is_configured": true, 00:13:33.133 "data_offset": 0, 00:13:33.133 "data_size": 65536 00:13:33.133 } 00:13:33.133 ] 00:13:33.133 }' 00:13:33.133 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.133 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.391 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:33.391 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.391 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 [2024-11-26 20:26:26.949949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 20:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.650 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.650 "name": "Existed_Raid", 00:13:33.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.650 "strip_size_kb": 64, 00:13:33.650 "state": "configuring", 00:13:33.650 "raid_level": "raid0", 00:13:33.650 "superblock": false, 00:13:33.650 
"num_base_bdevs": 4, 00:13:33.650 "num_base_bdevs_discovered": 2, 00:13:33.650 "num_base_bdevs_operational": 4, 00:13:33.650 "base_bdevs_list": [ 00:13:33.650 { 00:13:33.650 "name": "BaseBdev1", 00:13:33.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.650 "is_configured": false, 00:13:33.650 "data_offset": 0, 00:13:33.650 "data_size": 0 00:13:33.650 }, 00:13:33.650 { 00:13:33.650 "name": null, 00:13:33.650 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:33.650 "is_configured": false, 00:13:33.650 "data_offset": 0, 00:13:33.650 "data_size": 65536 00:13:33.650 }, 00:13:33.650 { 00:13:33.650 "name": "BaseBdev3", 00:13:33.650 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:33.650 "is_configured": true, 00:13:33.650 "data_offset": 0, 00:13:33.650 "data_size": 65536 00:13:33.650 }, 00:13:33.650 { 00:13:33.650 "name": "BaseBdev4", 00:13:33.650 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:33.650 "is_configured": true, 00:13:33.650 "data_offset": 0, 00:13:33.650 "data_size": 65536 00:13:33.650 } 00:13:33.650 ] 00:13:33.650 }' 00:13:33.650 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.650 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:33.909 20:26:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.909 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.167 [2024-11-26 20:26:27.487245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.167 BaseBdev1 00:13:34.167 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.168 [ 00:13:34.168 { 00:13:34.168 "name": "BaseBdev1", 00:13:34.168 "aliases": [ 00:13:34.168 "54c99053-7ca0-498c-9a5b-b8b5f29e8426" 00:13:34.168 ], 00:13:34.168 "product_name": "Malloc disk", 00:13:34.168 "block_size": 512, 00:13:34.168 "num_blocks": 65536, 00:13:34.168 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:34.168 "assigned_rate_limits": { 00:13:34.168 "rw_ios_per_sec": 0, 00:13:34.168 "rw_mbytes_per_sec": 0, 00:13:34.168 "r_mbytes_per_sec": 0, 00:13:34.168 "w_mbytes_per_sec": 0 00:13:34.168 }, 00:13:34.168 "claimed": true, 00:13:34.168 "claim_type": "exclusive_write", 00:13:34.168 "zoned": false, 00:13:34.168 "supported_io_types": { 00:13:34.168 "read": true, 00:13:34.168 "write": true, 00:13:34.168 "unmap": true, 00:13:34.168 "flush": true, 00:13:34.168 "reset": true, 00:13:34.168 "nvme_admin": false, 00:13:34.168 "nvme_io": false, 00:13:34.168 "nvme_io_md": false, 00:13:34.168 "write_zeroes": true, 00:13:34.168 "zcopy": true, 00:13:34.168 "get_zone_info": false, 00:13:34.168 "zone_management": false, 00:13:34.168 "zone_append": false, 00:13:34.168 "compare": false, 00:13:34.168 "compare_and_write": false, 00:13:34.168 "abort": true, 00:13:34.168 "seek_hole": false, 00:13:34.168 "seek_data": false, 00:13:34.168 "copy": true, 00:13:34.168 "nvme_iov_md": false 00:13:34.168 }, 00:13:34.168 "memory_domains": [ 00:13:34.168 { 00:13:34.168 "dma_device_id": "system", 00:13:34.168 "dma_device_type": 1 00:13:34.168 }, 00:13:34.168 { 00:13:34.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.168 "dma_device_type": 2 00:13:34.168 } 00:13:34.168 ], 00:13:34.168 "driver_specific": {} 00:13:34.168 } 00:13:34.168 ] 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.168 "name": "Existed_Raid", 00:13:34.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.168 "strip_size_kb": 64, 00:13:34.168 "state": "configuring", 00:13:34.168 "raid_level": "raid0", 00:13:34.168 "superblock": false, 
00:13:34.168 "num_base_bdevs": 4, 00:13:34.168 "num_base_bdevs_discovered": 3, 00:13:34.168 "num_base_bdevs_operational": 4, 00:13:34.168 "base_bdevs_list": [ 00:13:34.168 { 00:13:34.168 "name": "BaseBdev1", 00:13:34.168 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:34.168 "is_configured": true, 00:13:34.168 "data_offset": 0, 00:13:34.168 "data_size": 65536 00:13:34.168 }, 00:13:34.168 { 00:13:34.168 "name": null, 00:13:34.168 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:34.168 "is_configured": false, 00:13:34.168 "data_offset": 0, 00:13:34.168 "data_size": 65536 00:13:34.168 }, 00:13:34.168 { 00:13:34.168 "name": "BaseBdev3", 00:13:34.168 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:34.168 "is_configured": true, 00:13:34.168 "data_offset": 0, 00:13:34.168 "data_size": 65536 00:13:34.168 }, 00:13:34.168 { 00:13:34.168 "name": "BaseBdev4", 00:13:34.168 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:34.168 "is_configured": true, 00:13:34.168 "data_offset": 0, 00:13:34.168 "data_size": 65536 00:13:34.168 } 00:13:34.168 ] 00:13:34.168 }' 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.168 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.735 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.736 20:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.736 20:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:34.736 20:26:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.736 [2024-11-26 20:26:28.058451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.736 20:26:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.736 "name": "Existed_Raid", 00:13:34.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.736 "strip_size_kb": 64, 00:13:34.736 "state": "configuring", 00:13:34.736 "raid_level": "raid0", 00:13:34.736 "superblock": false, 00:13:34.736 "num_base_bdevs": 4, 00:13:34.736 "num_base_bdevs_discovered": 2, 00:13:34.736 "num_base_bdevs_operational": 4, 00:13:34.736 "base_bdevs_list": [ 00:13:34.736 { 00:13:34.736 "name": "BaseBdev1", 00:13:34.736 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:34.736 "is_configured": true, 00:13:34.736 "data_offset": 0, 00:13:34.736 "data_size": 65536 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "name": null, 00:13:34.736 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:34.736 "is_configured": false, 00:13:34.736 "data_offset": 0, 00:13:34.736 "data_size": 65536 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "name": null, 00:13:34.736 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:34.736 "is_configured": false, 00:13:34.736 "data_offset": 0, 00:13:34.736 "data_size": 65536 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "name": "BaseBdev4", 00:13:34.736 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:34.736 "is_configured": true, 00:13:34.736 "data_offset": 0, 00:13:34.736 "data_size": 65536 00:13:34.736 } 00:13:34.736 ] 00:13:34.736 }' 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.736 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.379 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.380 [2024-11-26 20:26:28.609495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.380 "name": "Existed_Raid", 00:13:35.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.380 "strip_size_kb": 64, 00:13:35.380 "state": "configuring", 00:13:35.380 "raid_level": "raid0", 00:13:35.380 "superblock": false, 00:13:35.380 "num_base_bdevs": 4, 00:13:35.380 "num_base_bdevs_discovered": 3, 00:13:35.380 "num_base_bdevs_operational": 4, 00:13:35.380 "base_bdevs_list": [ 00:13:35.380 { 00:13:35.380 "name": "BaseBdev1", 00:13:35.380 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:35.380 "is_configured": true, 00:13:35.380 "data_offset": 0, 00:13:35.380 "data_size": 65536 00:13:35.380 }, 00:13:35.380 { 00:13:35.380 "name": null, 00:13:35.380 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:35.380 "is_configured": false, 00:13:35.380 "data_offset": 0, 00:13:35.380 "data_size": 65536 00:13:35.380 }, 00:13:35.380 { 00:13:35.380 "name": "BaseBdev3", 00:13:35.380 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 
00:13:35.380 "is_configured": true, 00:13:35.380 "data_offset": 0, 00:13:35.380 "data_size": 65536 00:13:35.380 }, 00:13:35.380 { 00:13:35.380 "name": "BaseBdev4", 00:13:35.380 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:35.380 "is_configured": true, 00:13:35.380 "data_offset": 0, 00:13:35.380 "data_size": 65536 00:13:35.380 } 00:13:35.380 ] 00:13:35.380 }' 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.380 20:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.640 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.640 [2024-11-26 20:26:29.136641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.899 20:26:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.899 "name": "Existed_Raid", 00:13:35.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.899 "strip_size_kb": 64, 00:13:35.899 "state": "configuring", 00:13:35.899 "raid_level": "raid0", 00:13:35.899 "superblock": false, 00:13:35.899 "num_base_bdevs": 4, 00:13:35.899 "num_base_bdevs_discovered": 2, 00:13:35.899 
"num_base_bdevs_operational": 4, 00:13:35.899 "base_bdevs_list": [ 00:13:35.899 { 00:13:35.899 "name": null, 00:13:35.899 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:35.899 "is_configured": false, 00:13:35.899 "data_offset": 0, 00:13:35.899 "data_size": 65536 00:13:35.899 }, 00:13:35.899 { 00:13:35.899 "name": null, 00:13:35.899 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:35.899 "is_configured": false, 00:13:35.899 "data_offset": 0, 00:13:35.899 "data_size": 65536 00:13:35.899 }, 00:13:35.899 { 00:13:35.899 "name": "BaseBdev3", 00:13:35.899 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:35.899 "is_configured": true, 00:13:35.899 "data_offset": 0, 00:13:35.899 "data_size": 65536 00:13:35.899 }, 00:13:35.899 { 00:13:35.899 "name": "BaseBdev4", 00:13:35.899 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:35.899 "is_configured": true, 00:13:35.899 "data_offset": 0, 00:13:35.899 "data_size": 65536 00:13:35.899 } 00:13:35.899 ] 00:13:35.899 }' 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.899 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.157 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.157 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.157 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.157 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.416 [2024-11-26 20:26:29.755627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.416 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.417 
20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.417 "name": "Existed_Raid", 00:13:36.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.417 "strip_size_kb": 64, 00:13:36.417 "state": "configuring", 00:13:36.417 "raid_level": "raid0", 00:13:36.417 "superblock": false, 00:13:36.417 "num_base_bdevs": 4, 00:13:36.417 "num_base_bdevs_discovered": 3, 00:13:36.417 "num_base_bdevs_operational": 4, 00:13:36.417 "base_bdevs_list": [ 00:13:36.417 { 00:13:36.417 "name": null, 00:13:36.417 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:36.417 "is_configured": false, 00:13:36.417 "data_offset": 0, 00:13:36.417 "data_size": 65536 00:13:36.417 }, 00:13:36.417 { 00:13:36.417 "name": "BaseBdev2", 00:13:36.417 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:36.417 "is_configured": true, 00:13:36.417 "data_offset": 0, 00:13:36.417 "data_size": 65536 00:13:36.417 }, 00:13:36.417 { 00:13:36.417 "name": "BaseBdev3", 00:13:36.417 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:36.417 "is_configured": true, 00:13:36.417 "data_offset": 0, 00:13:36.417 "data_size": 65536 00:13:36.417 }, 00:13:36.417 { 00:13:36.417 "name": "BaseBdev4", 00:13:36.417 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:36.417 "is_configured": true, 00:13:36.417 "data_offset": 0, 00:13:36.417 "data_size": 65536 00:13:36.417 } 00:13:36.417 ] 00:13:36.417 }' 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.417 20:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.987 20:26:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 54c99053-7ca0-498c-9a5b-b8b5f29e8426 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 [2024-11-26 20:26:30.374950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:36.987 [2024-11-26 20:26:30.375003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:36.987 [2024-11-26 20:26:30.375012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:36.987 [2024-11-26 20:26:30.375320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:13:36.987 [2024-11-26 20:26:30.375481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:36.987 [2024-11-26 20:26:30.375493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:36.987 [2024-11-26 20:26:30.375750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.987 NewBaseBdev 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:36.987 [ 00:13:36.987 { 00:13:36.987 "name": "NewBaseBdev", 00:13:36.987 "aliases": [ 00:13:36.987 "54c99053-7ca0-498c-9a5b-b8b5f29e8426" 00:13:36.987 ], 00:13:36.987 "product_name": "Malloc disk", 00:13:36.987 "block_size": 512, 00:13:36.987 "num_blocks": 65536, 00:13:36.987 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:36.987 "assigned_rate_limits": { 00:13:36.987 "rw_ios_per_sec": 0, 00:13:36.987 "rw_mbytes_per_sec": 0, 00:13:36.987 "r_mbytes_per_sec": 0, 00:13:36.987 "w_mbytes_per_sec": 0 00:13:36.987 }, 00:13:36.987 "claimed": true, 00:13:36.987 "claim_type": "exclusive_write", 00:13:36.987 "zoned": false, 00:13:36.987 "supported_io_types": { 00:13:36.987 "read": true, 00:13:36.987 "write": true, 00:13:36.987 "unmap": true, 00:13:36.987 "flush": true, 00:13:36.987 "reset": true, 00:13:36.988 "nvme_admin": false, 00:13:36.988 "nvme_io": false, 00:13:36.988 "nvme_io_md": false, 00:13:36.988 "write_zeroes": true, 00:13:36.988 "zcopy": true, 00:13:36.988 "get_zone_info": false, 00:13:36.988 "zone_management": false, 00:13:36.988 "zone_append": false, 00:13:36.988 "compare": false, 00:13:36.988 "compare_and_write": false, 00:13:36.988 "abort": true, 00:13:36.988 "seek_hole": false, 00:13:36.988 "seek_data": false, 00:13:36.988 "copy": true, 00:13:36.988 "nvme_iov_md": false 00:13:36.988 }, 00:13:36.988 "memory_domains": [ 00:13:36.988 { 00:13:36.988 "dma_device_id": "system", 00:13:36.988 "dma_device_type": 1 00:13:36.988 }, 00:13:36.988 { 00:13:36.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.988 "dma_device_type": 2 00:13:36.988 } 00:13:36.988 ], 00:13:36.988 "driver_specific": {} 00:13:36.988 } 00:13:36.988 ] 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.988 "name": "Existed_Raid", 00:13:36.988 "uuid": "36873983-263b-455e-b744-b064ff1de08f", 00:13:36.988 "strip_size_kb": 64, 00:13:36.988 "state": "online", 00:13:36.988 "raid_level": "raid0", 00:13:36.988 "superblock": false, 00:13:36.988 "num_base_bdevs": 4, 00:13:36.988 
"num_base_bdevs_discovered": 4, 00:13:36.988 "num_base_bdevs_operational": 4, 00:13:36.988 "base_bdevs_list": [ 00:13:36.988 { 00:13:36.988 "name": "NewBaseBdev", 00:13:36.988 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:36.988 "is_configured": true, 00:13:36.988 "data_offset": 0, 00:13:36.988 "data_size": 65536 00:13:36.988 }, 00:13:36.988 { 00:13:36.988 "name": "BaseBdev2", 00:13:36.988 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:36.988 "is_configured": true, 00:13:36.988 "data_offset": 0, 00:13:36.988 "data_size": 65536 00:13:36.988 }, 00:13:36.988 { 00:13:36.988 "name": "BaseBdev3", 00:13:36.988 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:36.988 "is_configured": true, 00:13:36.988 "data_offset": 0, 00:13:36.988 "data_size": 65536 00:13:36.988 }, 00:13:36.988 { 00:13:36.988 "name": "BaseBdev4", 00:13:36.988 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:36.988 "is_configured": true, 00:13:36.988 "data_offset": 0, 00:13:36.988 "data_size": 65536 00:13:36.988 } 00:13:36.988 ] 00:13:36.988 }' 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.988 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.559 [2024-11-26 20:26:30.894538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.559 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.559 "name": "Existed_Raid", 00:13:37.559 "aliases": [ 00:13:37.559 "36873983-263b-455e-b744-b064ff1de08f" 00:13:37.559 ], 00:13:37.559 "product_name": "Raid Volume", 00:13:37.559 "block_size": 512, 00:13:37.559 "num_blocks": 262144, 00:13:37.559 "uuid": "36873983-263b-455e-b744-b064ff1de08f", 00:13:37.559 "assigned_rate_limits": { 00:13:37.559 "rw_ios_per_sec": 0, 00:13:37.559 "rw_mbytes_per_sec": 0, 00:13:37.559 "r_mbytes_per_sec": 0, 00:13:37.559 "w_mbytes_per_sec": 0 00:13:37.559 }, 00:13:37.559 "claimed": false, 00:13:37.559 "zoned": false, 00:13:37.559 "supported_io_types": { 00:13:37.559 "read": true, 00:13:37.559 "write": true, 00:13:37.559 "unmap": true, 00:13:37.559 "flush": true, 00:13:37.559 "reset": true, 00:13:37.559 "nvme_admin": false, 00:13:37.559 "nvme_io": false, 00:13:37.559 "nvme_io_md": false, 00:13:37.559 "write_zeroes": true, 00:13:37.559 "zcopy": false, 00:13:37.559 "get_zone_info": false, 00:13:37.559 "zone_management": false, 00:13:37.559 "zone_append": false, 00:13:37.559 "compare": false, 00:13:37.559 "compare_and_write": false, 00:13:37.559 "abort": false, 00:13:37.559 "seek_hole": false, 00:13:37.559 "seek_data": false, 00:13:37.559 "copy": false, 00:13:37.559 "nvme_iov_md": false 00:13:37.559 }, 00:13:37.560 "memory_domains": [ 
00:13:37.560 { 00:13:37.560 "dma_device_id": "system", 00:13:37.560 "dma_device_type": 1 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.560 "dma_device_type": 2 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "dma_device_id": "system", 00:13:37.560 "dma_device_type": 1 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.560 "dma_device_type": 2 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "dma_device_id": "system", 00:13:37.560 "dma_device_type": 1 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.560 "dma_device_type": 2 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "dma_device_id": "system", 00:13:37.560 "dma_device_type": 1 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.560 "dma_device_type": 2 00:13:37.560 } 00:13:37.560 ], 00:13:37.560 "driver_specific": { 00:13:37.560 "raid": { 00:13:37.560 "uuid": "36873983-263b-455e-b744-b064ff1de08f", 00:13:37.560 "strip_size_kb": 64, 00:13:37.560 "state": "online", 00:13:37.560 "raid_level": "raid0", 00:13:37.560 "superblock": false, 00:13:37.560 "num_base_bdevs": 4, 00:13:37.560 "num_base_bdevs_discovered": 4, 00:13:37.560 "num_base_bdevs_operational": 4, 00:13:37.560 "base_bdevs_list": [ 00:13:37.560 { 00:13:37.560 "name": "NewBaseBdev", 00:13:37.560 "uuid": "54c99053-7ca0-498c-9a5b-b8b5f29e8426", 00:13:37.560 "is_configured": true, 00:13:37.560 "data_offset": 0, 00:13:37.560 "data_size": 65536 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "name": "BaseBdev2", 00:13:37.560 "uuid": "719d86e1-e8f0-449d-bf3c-b95a64f3080c", 00:13:37.560 "is_configured": true, 00:13:37.560 "data_offset": 0, 00:13:37.560 "data_size": 65536 00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "name": "BaseBdev3", 00:13:37.560 "uuid": "f5806705-a58a-4fce-8c53-a704bad28b1d", 00:13:37.560 "is_configured": true, 00:13:37.560 "data_offset": 0, 00:13:37.560 "data_size": 65536 
00:13:37.560 }, 00:13:37.560 { 00:13:37.560 "name": "BaseBdev4", 00:13:37.560 "uuid": "dd0f5072-a935-4b6e-8a76-01880b46edbf", 00:13:37.560 "is_configured": true, 00:13:37.560 "data_offset": 0, 00:13:37.560 "data_size": 65536 00:13:37.560 } 00:13:37.560 ] 00:13:37.560 } 00:13:37.560 } 00:13:37.560 }' 00:13:37.560 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.560 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:37.560 BaseBdev2 00:13:37.560 BaseBdev3 00:13:37.560 BaseBdev4' 00:13:37.560 20:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.560 
20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.560 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.819 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.819 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.819 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 [2024-11-26 20:26:31.241543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.820 [2024-11-26 20:26:31.241638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.820 [2024-11-26 20:26:31.241761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.820 [2024-11-26 20:26:31.241869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.820 [2024-11-26 20:26:31.241919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69687 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69687 ']' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69687 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69687 00:13:37.820 killing process with pid 69687 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69687' 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69687 00:13:37.820 [2024-11-26 20:26:31.280111] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.820 20:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69687 00:13:38.389 [2024-11-26 20:26:31.712602] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.767 20:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:39.767 00:13:39.767 real 0m12.276s 00:13:39.767 user 0m19.553s 00:13:39.767 sys 0m2.111s 00:13:39.767 20:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.767 20:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.768 ************************************ 00:13:39.768 END TEST raid_state_function_test 00:13:39.768 ************************************ 00:13:39.768 20:26:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:13:39.768 20:26:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:39.768 20:26:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.768 20:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.768 ************************************ 00:13:39.768 START TEST raid_state_function_test_sb 00:13:39.768 ************************************ 00:13:39.768 20:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:13:39.768 20:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:39.768 20:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:39.768 20:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:39.768 
20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70366 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70366' 00:13:39.768 Process raid pid: 70366 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70366 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70366 ']' 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.768 20:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.768 [2024-11-26 20:26:33.107962] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:13:39.768 [2024-11-26 20:26:33.108187] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.768 [2024-11-26 20:26:33.285357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.027 [2024-11-26 20:26:33.406769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.287 [2024-11-26 20:26:33.627566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.287 [2024-11-26 20:26:33.627697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.547 [2024-11-26 20:26:34.016580] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.547 [2024-11-26 20:26:34.016667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.547 [2024-11-26 20:26:34.016680] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.547 [2024-11-26 20:26:34.016692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.547 [2024-11-26 20:26:34.016700] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:13:40.547 [2024-11-26 20:26:34.016710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.547 [2024-11-26 20:26:34.016718] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:40.547 [2024-11-26 20:26:34.016728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.547 20:26:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.547 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.547 "name": "Existed_Raid", 00:13:40.547 "uuid": "5635f589-2f90-4d28-97f7-c42d9cad3aa5", 00:13:40.547 "strip_size_kb": 64, 00:13:40.547 "state": "configuring", 00:13:40.547 "raid_level": "raid0", 00:13:40.547 "superblock": true, 00:13:40.547 "num_base_bdevs": 4, 00:13:40.548 "num_base_bdevs_discovered": 0, 00:13:40.548 "num_base_bdevs_operational": 4, 00:13:40.548 "base_bdevs_list": [ 00:13:40.548 { 00:13:40.548 "name": "BaseBdev1", 00:13:40.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.548 "is_configured": false, 00:13:40.548 "data_offset": 0, 00:13:40.548 "data_size": 0 00:13:40.548 }, 00:13:40.548 { 00:13:40.548 "name": "BaseBdev2", 00:13:40.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.548 "is_configured": false, 00:13:40.548 "data_offset": 0, 00:13:40.548 "data_size": 0 00:13:40.548 }, 00:13:40.548 { 00:13:40.548 "name": "BaseBdev3", 00:13:40.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.548 "is_configured": false, 00:13:40.548 "data_offset": 0, 00:13:40.548 "data_size": 0 00:13:40.548 }, 00:13:40.548 { 00:13:40.548 "name": "BaseBdev4", 00:13:40.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.548 "is_configured": false, 00:13:40.548 "data_offset": 0, 00:13:40.548 "data_size": 0 00:13:40.548 } 00:13:40.548 ] 00:13:40.548 }' 00:13:40.548 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.548 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.118 [2024-11-26 20:26:34.471704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.118 [2024-11-26 20:26:34.471811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.118 [2024-11-26 20:26:34.483689] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:41.118 [2024-11-26 20:26:34.483779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:41.118 [2024-11-26 20:26:34.483828] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.118 [2024-11-26 20:26:34.483856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.118 [2024-11-26 20:26:34.483878] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.118 [2024-11-26 20:26:34.483904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.118 [2024-11-26 20:26:34.483925] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:13:41.118 [2024-11-26 20:26:34.483985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.118 [2024-11-26 20:26:34.536206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.118 BaseBdev1 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.118 [ 00:13:41.118 { 00:13:41.118 "name": "BaseBdev1", 00:13:41.118 "aliases": [ 00:13:41.118 "c7eef39e-db87-47f4-b16b-574ccb0dc716" 00:13:41.118 ], 00:13:41.118 "product_name": "Malloc disk", 00:13:41.118 "block_size": 512, 00:13:41.118 "num_blocks": 65536, 00:13:41.118 "uuid": "c7eef39e-db87-47f4-b16b-574ccb0dc716", 00:13:41.118 "assigned_rate_limits": { 00:13:41.118 "rw_ios_per_sec": 0, 00:13:41.118 "rw_mbytes_per_sec": 0, 00:13:41.118 "r_mbytes_per_sec": 0, 00:13:41.118 "w_mbytes_per_sec": 0 00:13:41.118 }, 00:13:41.118 "claimed": true, 00:13:41.118 "claim_type": "exclusive_write", 00:13:41.118 "zoned": false, 00:13:41.118 "supported_io_types": { 00:13:41.118 "read": true, 00:13:41.118 "write": true, 00:13:41.118 "unmap": true, 00:13:41.118 "flush": true, 00:13:41.118 "reset": true, 00:13:41.118 "nvme_admin": false, 00:13:41.118 "nvme_io": false, 00:13:41.118 "nvme_io_md": false, 00:13:41.118 "write_zeroes": true, 00:13:41.118 "zcopy": true, 00:13:41.118 "get_zone_info": false, 00:13:41.118 "zone_management": false, 00:13:41.118 "zone_append": false, 00:13:41.118 "compare": false, 00:13:41.118 "compare_and_write": false, 00:13:41.118 "abort": true, 00:13:41.118 "seek_hole": false, 00:13:41.118 "seek_data": false, 00:13:41.118 "copy": true, 00:13:41.118 "nvme_iov_md": false 00:13:41.118 }, 00:13:41.118 "memory_domains": [ 00:13:41.118 { 00:13:41.118 "dma_device_id": "system", 00:13:41.118 "dma_device_type": 1 00:13:41.118 }, 00:13:41.118 { 00:13:41.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.118 "dma_device_type": 2 00:13:41.118 } 00:13:41.118 ], 00:13:41.118 "driver_specific": {} 
00:13:41.118 } 00:13:41.118 ] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.118 "name": "Existed_Raid", 00:13:41.118 "uuid": "58826d19-5fa7-46cf-a467-effb086c9607", 00:13:41.118 "strip_size_kb": 64, 00:13:41.118 "state": "configuring", 00:13:41.118 "raid_level": "raid0", 00:13:41.118 "superblock": true, 00:13:41.118 "num_base_bdevs": 4, 00:13:41.118 "num_base_bdevs_discovered": 1, 00:13:41.118 "num_base_bdevs_operational": 4, 00:13:41.118 "base_bdevs_list": [ 00:13:41.118 { 00:13:41.118 "name": "BaseBdev1", 00:13:41.118 "uuid": "c7eef39e-db87-47f4-b16b-574ccb0dc716", 00:13:41.118 "is_configured": true, 00:13:41.118 "data_offset": 2048, 00:13:41.118 "data_size": 63488 00:13:41.118 }, 00:13:41.118 { 00:13:41.118 "name": "BaseBdev2", 00:13:41.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.118 "is_configured": false, 00:13:41.118 "data_offset": 0, 00:13:41.118 "data_size": 0 00:13:41.118 }, 00:13:41.118 { 00:13:41.118 "name": "BaseBdev3", 00:13:41.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.118 "is_configured": false, 00:13:41.118 "data_offset": 0, 00:13:41.118 "data_size": 0 00:13:41.118 }, 00:13:41.118 { 00:13:41.118 "name": "BaseBdev4", 00:13:41.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.118 "is_configured": false, 00:13:41.118 "data_offset": 0, 00:13:41.118 "data_size": 0 00:13:41.118 } 00:13:41.118 ] 00:13:41.118 }' 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.118 20:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.686 [2024-11-26 20:26:35.051403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.686 [2024-11-26 20:26:35.051523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.686 [2024-11-26 20:26:35.063435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.686 [2024-11-26 20:26:35.065605] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.686 [2024-11-26 20:26:35.065704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.686 [2024-11-26 20:26:35.065753] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.686 [2024-11-26 20:26:35.065787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.686 [2024-11-26 20:26:35.065857] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:41.686 [2024-11-26 20:26:35.065886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:41.686 20:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.686 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.686 "name": 
"Existed_Raid", 00:13:41.686 "uuid": "2bc72805-9ffc-4f27-a1d8-994562b7394a", 00:13:41.686 "strip_size_kb": 64, 00:13:41.686 "state": "configuring", 00:13:41.686 "raid_level": "raid0", 00:13:41.686 "superblock": true, 00:13:41.686 "num_base_bdevs": 4, 00:13:41.686 "num_base_bdevs_discovered": 1, 00:13:41.686 "num_base_bdevs_operational": 4, 00:13:41.686 "base_bdevs_list": [ 00:13:41.686 { 00:13:41.686 "name": "BaseBdev1", 00:13:41.686 "uuid": "c7eef39e-db87-47f4-b16b-574ccb0dc716", 00:13:41.686 "is_configured": true, 00:13:41.686 "data_offset": 2048, 00:13:41.687 "data_size": 63488 00:13:41.687 }, 00:13:41.687 { 00:13:41.687 "name": "BaseBdev2", 00:13:41.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.687 "is_configured": false, 00:13:41.687 "data_offset": 0, 00:13:41.687 "data_size": 0 00:13:41.687 }, 00:13:41.687 { 00:13:41.687 "name": "BaseBdev3", 00:13:41.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.687 "is_configured": false, 00:13:41.687 "data_offset": 0, 00:13:41.687 "data_size": 0 00:13:41.687 }, 00:13:41.687 { 00:13:41.687 "name": "BaseBdev4", 00:13:41.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.687 "is_configured": false, 00:13:41.687 "data_offset": 0, 00:13:41.687 "data_size": 0 00:13:41.687 } 00:13:41.687 ] 00:13:41.687 }' 00:13:41.687 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.687 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.254 [2024-11-26 20:26:35.605688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:13:42.254 BaseBdev2 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.254 [ 00:13:42.254 { 00:13:42.254 "name": "BaseBdev2", 00:13:42.254 "aliases": [ 00:13:42.254 "18ab8ab9-5e52-4cce-a8f0-eeba1ecfe602" 00:13:42.254 ], 00:13:42.254 "product_name": "Malloc disk", 00:13:42.254 "block_size": 512, 00:13:42.254 "num_blocks": 65536, 00:13:42.254 "uuid": "18ab8ab9-5e52-4cce-a8f0-eeba1ecfe602", 00:13:42.254 
"assigned_rate_limits": { 00:13:42.254 "rw_ios_per_sec": 0, 00:13:42.254 "rw_mbytes_per_sec": 0, 00:13:42.254 "r_mbytes_per_sec": 0, 00:13:42.254 "w_mbytes_per_sec": 0 00:13:42.254 }, 00:13:42.254 "claimed": true, 00:13:42.254 "claim_type": "exclusive_write", 00:13:42.254 "zoned": false, 00:13:42.254 "supported_io_types": { 00:13:42.254 "read": true, 00:13:42.254 "write": true, 00:13:42.254 "unmap": true, 00:13:42.254 "flush": true, 00:13:42.254 "reset": true, 00:13:42.254 "nvme_admin": false, 00:13:42.254 "nvme_io": false, 00:13:42.254 "nvme_io_md": false, 00:13:42.254 "write_zeroes": true, 00:13:42.254 "zcopy": true, 00:13:42.254 "get_zone_info": false, 00:13:42.254 "zone_management": false, 00:13:42.254 "zone_append": false, 00:13:42.254 "compare": false, 00:13:42.254 "compare_and_write": false, 00:13:42.254 "abort": true, 00:13:42.254 "seek_hole": false, 00:13:42.254 "seek_data": false, 00:13:42.254 "copy": true, 00:13:42.254 "nvme_iov_md": false 00:13:42.254 }, 00:13:42.254 "memory_domains": [ 00:13:42.254 { 00:13:42.254 "dma_device_id": "system", 00:13:42.254 "dma_device_type": 1 00:13:42.254 }, 00:13:42.254 { 00:13:42.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.254 "dma_device_type": 2 00:13:42.254 } 00:13:42.254 ], 00:13:42.254 "driver_specific": {} 00:13:42.254 } 00:13:42.254 ] 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.254 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.254 "name": "Existed_Raid", 00:13:42.254 "uuid": "2bc72805-9ffc-4f27-a1d8-994562b7394a", 00:13:42.254 "strip_size_kb": 64, 00:13:42.254 "state": "configuring", 00:13:42.254 "raid_level": "raid0", 00:13:42.254 "superblock": true, 00:13:42.254 "num_base_bdevs": 4, 00:13:42.254 "num_base_bdevs_discovered": 2, 00:13:42.254 "num_base_bdevs_operational": 4, 
00:13:42.254 "base_bdevs_list": [ 00:13:42.254 { 00:13:42.254 "name": "BaseBdev1", 00:13:42.254 "uuid": "c7eef39e-db87-47f4-b16b-574ccb0dc716", 00:13:42.254 "is_configured": true, 00:13:42.254 "data_offset": 2048, 00:13:42.254 "data_size": 63488 00:13:42.254 }, 00:13:42.254 { 00:13:42.255 "name": "BaseBdev2", 00:13:42.255 "uuid": "18ab8ab9-5e52-4cce-a8f0-eeba1ecfe602", 00:13:42.255 "is_configured": true, 00:13:42.255 "data_offset": 2048, 00:13:42.255 "data_size": 63488 00:13:42.255 }, 00:13:42.255 { 00:13:42.255 "name": "BaseBdev3", 00:13:42.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.255 "is_configured": false, 00:13:42.255 "data_offset": 0, 00:13:42.255 "data_size": 0 00:13:42.255 }, 00:13:42.255 { 00:13:42.255 "name": "BaseBdev4", 00:13:42.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.255 "is_configured": false, 00:13:42.255 "data_offset": 0, 00:13:42.255 "data_size": 0 00:13:42.255 } 00:13:42.255 ] 00:13:42.255 }' 00:13:42.255 20:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.255 20:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.824 [2024-11-26 20:26:36.175385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.824 BaseBdev3 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.824 [ 00:13:42.824 { 00:13:42.824 "name": "BaseBdev3", 00:13:42.824 "aliases": [ 00:13:42.824 "ceadd8b0-8fdf-4652-a64a-6cbb31944254" 00:13:42.824 ], 00:13:42.824 "product_name": "Malloc disk", 00:13:42.824 "block_size": 512, 00:13:42.824 "num_blocks": 65536, 00:13:42.824 "uuid": "ceadd8b0-8fdf-4652-a64a-6cbb31944254", 00:13:42.824 "assigned_rate_limits": { 00:13:42.824 "rw_ios_per_sec": 0, 00:13:42.824 "rw_mbytes_per_sec": 0, 00:13:42.824 "r_mbytes_per_sec": 0, 00:13:42.824 "w_mbytes_per_sec": 0 00:13:42.824 }, 00:13:42.824 "claimed": true, 00:13:42.824 "claim_type": "exclusive_write", 00:13:42.824 "zoned": false, 00:13:42.824 "supported_io_types": { 00:13:42.824 "read": true, 00:13:42.824 
"write": true, 00:13:42.824 "unmap": true, 00:13:42.824 "flush": true, 00:13:42.824 "reset": true, 00:13:42.824 "nvme_admin": false, 00:13:42.824 "nvme_io": false, 00:13:42.824 "nvme_io_md": false, 00:13:42.824 "write_zeroes": true, 00:13:42.824 "zcopy": true, 00:13:42.824 "get_zone_info": false, 00:13:42.824 "zone_management": false, 00:13:42.824 "zone_append": false, 00:13:42.824 "compare": false, 00:13:42.824 "compare_and_write": false, 00:13:42.824 "abort": true, 00:13:42.824 "seek_hole": false, 00:13:42.824 "seek_data": false, 00:13:42.824 "copy": true, 00:13:42.824 "nvme_iov_md": false 00:13:42.824 }, 00:13:42.824 "memory_domains": [ 00:13:42.824 { 00:13:42.824 "dma_device_id": "system", 00:13:42.824 "dma_device_type": 1 00:13:42.824 }, 00:13:42.824 { 00:13:42.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.824 "dma_device_type": 2 00:13:42.824 } 00:13:42.824 ], 00:13:42.824 "driver_specific": {} 00:13:42.824 } 00:13:42.824 ] 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.824 "name": "Existed_Raid", 00:13:42.824 "uuid": "2bc72805-9ffc-4f27-a1d8-994562b7394a", 00:13:42.824 "strip_size_kb": 64, 00:13:42.824 "state": "configuring", 00:13:42.824 "raid_level": "raid0", 00:13:42.824 "superblock": true, 00:13:42.824 "num_base_bdevs": 4, 00:13:42.824 "num_base_bdevs_discovered": 3, 00:13:42.824 "num_base_bdevs_operational": 4, 00:13:42.824 "base_bdevs_list": [ 00:13:42.824 { 00:13:42.824 "name": "BaseBdev1", 00:13:42.824 "uuid": "c7eef39e-db87-47f4-b16b-574ccb0dc716", 00:13:42.824 "is_configured": true, 00:13:42.824 "data_offset": 2048, 00:13:42.824 "data_size": 63488 00:13:42.824 }, 00:13:42.824 { 00:13:42.824 "name": "BaseBdev2", 00:13:42.824 "uuid": 
"18ab8ab9-5e52-4cce-a8f0-eeba1ecfe602", 00:13:42.824 "is_configured": true, 00:13:42.824 "data_offset": 2048, 00:13:42.824 "data_size": 63488 00:13:42.824 }, 00:13:42.824 { 00:13:42.824 "name": "BaseBdev3", 00:13:42.824 "uuid": "ceadd8b0-8fdf-4652-a64a-6cbb31944254", 00:13:42.824 "is_configured": true, 00:13:42.824 "data_offset": 2048, 00:13:42.824 "data_size": 63488 00:13:42.824 }, 00:13:42.824 { 00:13:42.824 "name": "BaseBdev4", 00:13:42.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.824 "is_configured": false, 00:13:42.824 "data_offset": 0, 00:13:42.824 "data_size": 0 00:13:42.824 } 00:13:42.824 ] 00:13:42.824 }' 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.824 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.391 BaseBdev4 00:13:43.391 [2024-11-26 20:26:36.727235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:43.391 [2024-11-26 20:26:36.727600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:43.391 [2024-11-26 20:26:36.727620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:43.391 [2024-11-26 20:26:36.727925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:43.391 [2024-11-26 20:26:36.728086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:43.391 [2024-11-26 20:26:36.728106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:43.391 [2024-11-26 20:26:36.728304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.391 [ 00:13:43.391 { 00:13:43.391 "name": "BaseBdev4", 00:13:43.391 "aliases": [ 00:13:43.391 "a8ab20a5-f07f-46c2-a007-7065b3652837" 00:13:43.391 ], 00:13:43.391 "product_name": "Malloc disk", 00:13:43.391 "block_size": 512, 00:13:43.391 
"num_blocks": 65536, 00:13:43.391 "uuid": "a8ab20a5-f07f-46c2-a007-7065b3652837", 00:13:43.391 "assigned_rate_limits": { 00:13:43.391 "rw_ios_per_sec": 0, 00:13:43.391 "rw_mbytes_per_sec": 0, 00:13:43.391 "r_mbytes_per_sec": 0, 00:13:43.391 "w_mbytes_per_sec": 0 00:13:43.391 }, 00:13:43.391 "claimed": true, 00:13:43.391 "claim_type": "exclusive_write", 00:13:43.391 "zoned": false, 00:13:43.391 "supported_io_types": { 00:13:43.391 "read": true, 00:13:43.391 "write": true, 00:13:43.391 "unmap": true, 00:13:43.391 "flush": true, 00:13:43.391 "reset": true, 00:13:43.391 "nvme_admin": false, 00:13:43.391 "nvme_io": false, 00:13:43.391 "nvme_io_md": false, 00:13:43.391 "write_zeroes": true, 00:13:43.391 "zcopy": true, 00:13:43.391 "get_zone_info": false, 00:13:43.391 "zone_management": false, 00:13:43.391 "zone_append": false, 00:13:43.391 "compare": false, 00:13:43.391 "compare_and_write": false, 00:13:43.391 "abort": true, 00:13:43.391 "seek_hole": false, 00:13:43.391 "seek_data": false, 00:13:43.391 "copy": true, 00:13:43.391 "nvme_iov_md": false 00:13:43.391 }, 00:13:43.391 "memory_domains": [ 00:13:43.391 { 00:13:43.391 "dma_device_id": "system", 00:13:43.391 "dma_device_type": 1 00:13:43.391 }, 00:13:43.391 { 00:13:43.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.391 "dma_device_type": 2 00:13:43.391 } 00:13:43.391 ], 00:13:43.391 "driver_specific": {} 00:13:43.391 } 00:13:43.391 ] 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:43.391 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.392 "name": "Existed_Raid", 00:13:43.392 "uuid": "2bc72805-9ffc-4f27-a1d8-994562b7394a", 00:13:43.392 "strip_size_kb": 64, 00:13:43.392 "state": "online", 00:13:43.392 "raid_level": "raid0", 00:13:43.392 "superblock": true, 00:13:43.392 "num_base_bdevs": 4, 
00:13:43.392 "num_base_bdevs_discovered": 4, 00:13:43.392 "num_base_bdevs_operational": 4, 00:13:43.392 "base_bdevs_list": [ 00:13:43.392 { 00:13:43.392 "name": "BaseBdev1", 00:13:43.392 "uuid": "c7eef39e-db87-47f4-b16b-574ccb0dc716", 00:13:43.392 "is_configured": true, 00:13:43.392 "data_offset": 2048, 00:13:43.392 "data_size": 63488 00:13:43.392 }, 00:13:43.392 { 00:13:43.392 "name": "BaseBdev2", 00:13:43.392 "uuid": "18ab8ab9-5e52-4cce-a8f0-eeba1ecfe602", 00:13:43.392 "is_configured": true, 00:13:43.392 "data_offset": 2048, 00:13:43.392 "data_size": 63488 00:13:43.392 }, 00:13:43.392 { 00:13:43.392 "name": "BaseBdev3", 00:13:43.392 "uuid": "ceadd8b0-8fdf-4652-a64a-6cbb31944254", 00:13:43.392 "is_configured": true, 00:13:43.392 "data_offset": 2048, 00:13:43.392 "data_size": 63488 00:13:43.392 }, 00:13:43.392 { 00:13:43.392 "name": "BaseBdev4", 00:13:43.392 "uuid": "a8ab20a5-f07f-46c2-a007-7065b3652837", 00:13:43.392 "is_configured": true, 00:13:43.392 "data_offset": 2048, 00:13:43.392 "data_size": 63488 00:13:43.392 } 00:13:43.392 ] 00:13:43.392 }' 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.392 20:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.958 
20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.958 [2024-11-26 20:26:37.246830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.958 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.958 "name": "Existed_Raid", 00:13:43.958 "aliases": [ 00:13:43.958 "2bc72805-9ffc-4f27-a1d8-994562b7394a" 00:13:43.958 ], 00:13:43.958 "product_name": "Raid Volume", 00:13:43.958 "block_size": 512, 00:13:43.958 "num_blocks": 253952, 00:13:43.958 "uuid": "2bc72805-9ffc-4f27-a1d8-994562b7394a", 00:13:43.958 "assigned_rate_limits": { 00:13:43.958 "rw_ios_per_sec": 0, 00:13:43.958 "rw_mbytes_per_sec": 0, 00:13:43.958 "r_mbytes_per_sec": 0, 00:13:43.958 "w_mbytes_per_sec": 0 00:13:43.958 }, 00:13:43.958 "claimed": false, 00:13:43.958 "zoned": false, 00:13:43.958 "supported_io_types": { 00:13:43.958 "read": true, 00:13:43.958 "write": true, 00:13:43.958 "unmap": true, 00:13:43.958 "flush": true, 00:13:43.958 "reset": true, 00:13:43.958 "nvme_admin": false, 00:13:43.958 "nvme_io": false, 00:13:43.958 "nvme_io_md": false, 00:13:43.958 "write_zeroes": true, 00:13:43.958 "zcopy": false, 00:13:43.958 "get_zone_info": false, 00:13:43.958 "zone_management": false, 00:13:43.958 "zone_append": false, 00:13:43.958 "compare": false, 00:13:43.958 "compare_and_write": false, 00:13:43.958 "abort": false, 00:13:43.958 "seek_hole": false, 00:13:43.958 "seek_data": false, 00:13:43.958 "copy": false, 00:13:43.958 
"nvme_iov_md": false 00:13:43.958 }, 00:13:43.958 "memory_domains": [ 00:13:43.958 { 00:13:43.958 "dma_device_id": "system", 00:13:43.958 "dma_device_type": 1 00:13:43.958 }, 00:13:43.958 { 00:13:43.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.958 "dma_device_type": 2 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "dma_device_id": "system", 00:13:43.959 "dma_device_type": 1 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.959 "dma_device_type": 2 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "dma_device_id": "system", 00:13:43.959 "dma_device_type": 1 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.959 "dma_device_type": 2 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "dma_device_id": "system", 00:13:43.959 "dma_device_type": 1 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.959 "dma_device_type": 2 00:13:43.959 } 00:13:43.959 ], 00:13:43.959 "driver_specific": { 00:13:43.959 "raid": { 00:13:43.959 "uuid": "2bc72805-9ffc-4f27-a1d8-994562b7394a", 00:13:43.959 "strip_size_kb": 64, 00:13:43.959 "state": "online", 00:13:43.959 "raid_level": "raid0", 00:13:43.959 "superblock": true, 00:13:43.959 "num_base_bdevs": 4, 00:13:43.959 "num_base_bdevs_discovered": 4, 00:13:43.959 "num_base_bdevs_operational": 4, 00:13:43.959 "base_bdevs_list": [ 00:13:43.959 { 00:13:43.959 "name": "BaseBdev1", 00:13:43.959 "uuid": "c7eef39e-db87-47f4-b16b-574ccb0dc716", 00:13:43.959 "is_configured": true, 00:13:43.959 "data_offset": 2048, 00:13:43.959 "data_size": 63488 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "name": "BaseBdev2", 00:13:43.959 "uuid": "18ab8ab9-5e52-4cce-a8f0-eeba1ecfe602", 00:13:43.959 "is_configured": true, 00:13:43.959 "data_offset": 2048, 00:13:43.959 "data_size": 63488 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "name": "BaseBdev3", 00:13:43.959 "uuid": "ceadd8b0-8fdf-4652-a64a-6cbb31944254", 00:13:43.959 "is_configured": true, 
00:13:43.959 "data_offset": 2048, 00:13:43.959 "data_size": 63488 00:13:43.959 }, 00:13:43.959 { 00:13:43.959 "name": "BaseBdev4", 00:13:43.959 "uuid": "a8ab20a5-f07f-46c2-a007-7065b3652837", 00:13:43.959 "is_configured": true, 00:13:43.959 "data_offset": 2048, 00:13:43.959 "data_size": 63488 00:13:43.959 } 00:13:43.959 ] 00:13:43.959 } 00:13:43.959 } 00:13:43.959 }' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:43.959 BaseBdev2 00:13:43.959 BaseBdev3 00:13:43.959 BaseBdev4' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.959 20:26:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.959 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.218 [2024-11-26 20:26:37.573958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:44.218 [2024-11-26 20:26:37.574041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.218 [2024-11-26 20:26:37.574135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.218 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.219 "name": "Existed_Raid", 00:13:44.219 "uuid": "2bc72805-9ffc-4f27-a1d8-994562b7394a", 00:13:44.219 "strip_size_kb": 64, 00:13:44.219 "state": "offline", 00:13:44.219 "raid_level": "raid0", 00:13:44.219 "superblock": true, 00:13:44.219 "num_base_bdevs": 4, 00:13:44.219 "num_base_bdevs_discovered": 3, 00:13:44.219 "num_base_bdevs_operational": 3, 00:13:44.219 "base_bdevs_list": [ 00:13:44.219 { 00:13:44.219 "name": null, 00:13:44.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.219 "is_configured": false, 00:13:44.219 "data_offset": 0, 00:13:44.219 "data_size": 63488 00:13:44.219 }, 00:13:44.219 { 00:13:44.219 "name": "BaseBdev2", 00:13:44.219 "uuid": "18ab8ab9-5e52-4cce-a8f0-eeba1ecfe602", 00:13:44.219 "is_configured": true, 00:13:44.219 "data_offset": 2048, 00:13:44.219 "data_size": 63488 00:13:44.219 }, 00:13:44.219 { 00:13:44.219 "name": "BaseBdev3", 00:13:44.219 "uuid": "ceadd8b0-8fdf-4652-a64a-6cbb31944254", 00:13:44.219 "is_configured": true, 00:13:44.219 "data_offset": 2048, 00:13:44.219 "data_size": 63488 00:13:44.219 }, 00:13:44.219 { 00:13:44.219 "name": "BaseBdev4", 00:13:44.219 "uuid": "a8ab20a5-f07f-46c2-a007-7065b3652837", 00:13:44.219 "is_configured": true, 00:13:44.219 "data_offset": 2048, 00:13:44.219 "data_size": 63488 00:13:44.219 } 00:13:44.219 ] 00:13:44.219 }' 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.219 20:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.785 20:26:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.785 [2024-11-26 20:26:38.183676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.785 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:45.043 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:45.043 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:45.043 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:45.043 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.043 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.043 [2024-11-26 20:26:38.350273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:45.043 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.043 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:45.044 20:26:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.044 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.044 [2024-11-26 20:26:38.509217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:45.044 [2024-11-26 20:26:38.509295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 BaseBdev2 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 [ 00:13:45.303 { 00:13:45.303 "name": "BaseBdev2", 00:13:45.303 "aliases": [ 00:13:45.303 
"16347174-f6ee-4e33-96ee-687405f91854" 00:13:45.303 ], 00:13:45.303 "product_name": "Malloc disk", 00:13:45.303 "block_size": 512, 00:13:45.303 "num_blocks": 65536, 00:13:45.303 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:45.303 "assigned_rate_limits": { 00:13:45.303 "rw_ios_per_sec": 0, 00:13:45.303 "rw_mbytes_per_sec": 0, 00:13:45.303 "r_mbytes_per_sec": 0, 00:13:45.303 "w_mbytes_per_sec": 0 00:13:45.303 }, 00:13:45.303 "claimed": false, 00:13:45.303 "zoned": false, 00:13:45.303 "supported_io_types": { 00:13:45.303 "read": true, 00:13:45.303 "write": true, 00:13:45.303 "unmap": true, 00:13:45.303 "flush": true, 00:13:45.303 "reset": true, 00:13:45.303 "nvme_admin": false, 00:13:45.303 "nvme_io": false, 00:13:45.303 "nvme_io_md": false, 00:13:45.303 "write_zeroes": true, 00:13:45.303 "zcopy": true, 00:13:45.303 "get_zone_info": false, 00:13:45.303 "zone_management": false, 00:13:45.303 "zone_append": false, 00:13:45.303 "compare": false, 00:13:45.303 "compare_and_write": false, 00:13:45.303 "abort": true, 00:13:45.303 "seek_hole": false, 00:13:45.303 "seek_data": false, 00:13:45.303 "copy": true, 00:13:45.303 "nvme_iov_md": false 00:13:45.303 }, 00:13:45.303 "memory_domains": [ 00:13:45.303 { 00:13:45.303 "dma_device_id": "system", 00:13:45.303 "dma_device_type": 1 00:13:45.303 }, 00:13:45.303 { 00:13:45.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.303 "dma_device_type": 2 00:13:45.303 } 00:13:45.303 ], 00:13:45.303 "driver_specific": {} 00:13:45.303 } 00:13:45.303 ] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.303 20:26:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 BaseBdev3 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:45.303 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.304 [ 00:13:45.304 { 
00:13:45.304 "name": "BaseBdev3", 00:13:45.304 "aliases": [ 00:13:45.304 "8007f262-7e78-4e4f-9573-59e6ab3ab7a5" 00:13:45.304 ], 00:13:45.304 "product_name": "Malloc disk", 00:13:45.304 "block_size": 512, 00:13:45.304 "num_blocks": 65536, 00:13:45.304 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:45.304 "assigned_rate_limits": { 00:13:45.304 "rw_ios_per_sec": 0, 00:13:45.304 "rw_mbytes_per_sec": 0, 00:13:45.304 "r_mbytes_per_sec": 0, 00:13:45.304 "w_mbytes_per_sec": 0 00:13:45.304 }, 00:13:45.304 "claimed": false, 00:13:45.304 "zoned": false, 00:13:45.304 "supported_io_types": { 00:13:45.304 "read": true, 00:13:45.304 "write": true, 00:13:45.304 "unmap": true, 00:13:45.304 "flush": true, 00:13:45.304 "reset": true, 00:13:45.304 "nvme_admin": false, 00:13:45.304 "nvme_io": false, 00:13:45.304 "nvme_io_md": false, 00:13:45.304 "write_zeroes": true, 00:13:45.304 "zcopy": true, 00:13:45.304 "get_zone_info": false, 00:13:45.304 "zone_management": false, 00:13:45.304 "zone_append": false, 00:13:45.304 "compare": false, 00:13:45.304 "compare_and_write": false, 00:13:45.304 "abort": true, 00:13:45.304 "seek_hole": false, 00:13:45.304 "seek_data": false, 00:13:45.304 "copy": true, 00:13:45.304 "nvme_iov_md": false 00:13:45.304 }, 00:13:45.304 "memory_domains": [ 00:13:45.304 { 00:13:45.304 "dma_device_id": "system", 00:13:45.304 "dma_device_type": 1 00:13:45.304 }, 00:13:45.304 { 00:13:45.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.304 "dma_device_type": 2 00:13:45.304 } 00:13:45.304 ], 00:13:45.304 "driver_specific": {} 00:13:45.304 } 00:13:45.304 ] 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.304 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.563 BaseBdev4 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:45.563 [ 00:13:45.563 { 00:13:45.563 "name": "BaseBdev4", 00:13:45.563 "aliases": [ 00:13:45.563 "4fc90ff3-6e13-4471-8c55-2f5d3540a375" 00:13:45.563 ], 00:13:45.563 "product_name": "Malloc disk", 00:13:45.563 "block_size": 512, 00:13:45.563 "num_blocks": 65536, 00:13:45.563 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:45.563 "assigned_rate_limits": { 00:13:45.563 "rw_ios_per_sec": 0, 00:13:45.563 "rw_mbytes_per_sec": 0, 00:13:45.563 "r_mbytes_per_sec": 0, 00:13:45.563 "w_mbytes_per_sec": 0 00:13:45.563 }, 00:13:45.563 "claimed": false, 00:13:45.563 "zoned": false, 00:13:45.563 "supported_io_types": { 00:13:45.563 "read": true, 00:13:45.563 "write": true, 00:13:45.563 "unmap": true, 00:13:45.563 "flush": true, 00:13:45.563 "reset": true, 00:13:45.563 "nvme_admin": false, 00:13:45.563 "nvme_io": false, 00:13:45.563 "nvme_io_md": false, 00:13:45.563 "write_zeroes": true, 00:13:45.563 "zcopy": true, 00:13:45.563 "get_zone_info": false, 00:13:45.563 "zone_management": false, 00:13:45.563 "zone_append": false, 00:13:45.563 "compare": false, 00:13:45.563 "compare_and_write": false, 00:13:45.563 "abort": true, 00:13:45.563 "seek_hole": false, 00:13:45.563 "seek_data": false, 00:13:45.563 "copy": true, 00:13:45.563 "nvme_iov_md": false 00:13:45.563 }, 00:13:45.563 "memory_domains": [ 00:13:45.563 { 00:13:45.563 "dma_device_id": "system", 00:13:45.563 "dma_device_type": 1 00:13:45.563 }, 00:13:45.563 { 00:13:45.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.563 "dma_device_type": 2 00:13:45.563 } 00:13:45.563 ], 00:13:45.563 "driver_specific": {} 00:13:45.563 } 00:13:45.563 ] 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.563 20:26:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.563 [2024-11-26 20:26:38.925061] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.563 [2024-11-26 20:26:38.925184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.563 [2024-11-26 20:26:38.925279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.563 [2024-11-26 20:26:38.927423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.563 [2024-11-26 20:26:38.927540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.563 "name": "Existed_Raid", 00:13:45.563 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:45.563 "strip_size_kb": 64, 00:13:45.563 "state": "configuring", 00:13:45.563 "raid_level": "raid0", 00:13:45.563 "superblock": true, 00:13:45.563 "num_base_bdevs": 4, 00:13:45.563 "num_base_bdevs_discovered": 3, 00:13:45.563 "num_base_bdevs_operational": 4, 00:13:45.563 "base_bdevs_list": [ 00:13:45.563 { 00:13:45.563 "name": "BaseBdev1", 00:13:45.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.563 "is_configured": false, 00:13:45.563 "data_offset": 0, 00:13:45.563 "data_size": 0 00:13:45.563 }, 00:13:45.563 { 00:13:45.563 "name": "BaseBdev2", 00:13:45.563 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:45.563 "is_configured": true, 00:13:45.563 "data_offset": 2048, 00:13:45.563 "data_size": 63488 
00:13:45.563 }, 00:13:45.563 { 00:13:45.563 "name": "BaseBdev3", 00:13:45.563 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:45.563 "is_configured": true, 00:13:45.563 "data_offset": 2048, 00:13:45.563 "data_size": 63488 00:13:45.563 }, 00:13:45.563 { 00:13:45.563 "name": "BaseBdev4", 00:13:45.563 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:45.563 "is_configured": true, 00:13:45.563 "data_offset": 2048, 00:13:45.563 "data_size": 63488 00:13:45.563 } 00:13:45.563 ] 00:13:45.563 }' 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.563 20:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.136 [2024-11-26 20:26:39.412295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.136 "name": "Existed_Raid", 00:13:46.136 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:46.136 "strip_size_kb": 64, 00:13:46.136 "state": "configuring", 00:13:46.136 "raid_level": "raid0", 00:13:46.136 "superblock": true, 00:13:46.136 "num_base_bdevs": 4, 00:13:46.136 "num_base_bdevs_discovered": 2, 00:13:46.136 "num_base_bdevs_operational": 4, 00:13:46.136 "base_bdevs_list": [ 00:13:46.136 { 00:13:46.136 "name": "BaseBdev1", 00:13:46.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.136 "is_configured": false, 00:13:46.136 "data_offset": 0, 00:13:46.136 "data_size": 0 00:13:46.136 }, 00:13:46.136 { 00:13:46.136 "name": null, 00:13:46.136 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:46.136 "is_configured": false, 00:13:46.136 "data_offset": 0, 00:13:46.136 "data_size": 63488 
00:13:46.136 }, 00:13:46.136 { 00:13:46.136 "name": "BaseBdev3", 00:13:46.136 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:46.136 "is_configured": true, 00:13:46.136 "data_offset": 2048, 00:13:46.136 "data_size": 63488 00:13:46.136 }, 00:13:46.136 { 00:13:46.136 "name": "BaseBdev4", 00:13:46.136 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:46.136 "is_configured": true, 00:13:46.136 "data_offset": 2048, 00:13:46.136 "data_size": 63488 00:13:46.136 } 00:13:46.136 ] 00:13:46.136 }' 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.136 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.395 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.395 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.395 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.395 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.395 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.653 [2024-11-26 20:26:39.996215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.653 BaseBdev1 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.653 20:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.653 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.654 [ 00:13:46.654 { 00:13:46.654 "name": "BaseBdev1", 00:13:46.654 "aliases": [ 00:13:46.654 "036e9f08-884d-48ee-8b7d-583df2072008" 00:13:46.654 ], 00:13:46.654 "product_name": "Malloc disk", 00:13:46.654 "block_size": 512, 00:13:46.654 "num_blocks": 65536, 00:13:46.654 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:46.654 "assigned_rate_limits": { 00:13:46.654 "rw_ios_per_sec": 0, 00:13:46.654 "rw_mbytes_per_sec": 0, 
00:13:46.654 "r_mbytes_per_sec": 0, 00:13:46.654 "w_mbytes_per_sec": 0 00:13:46.654 }, 00:13:46.654 "claimed": true, 00:13:46.654 "claim_type": "exclusive_write", 00:13:46.654 "zoned": false, 00:13:46.654 "supported_io_types": { 00:13:46.654 "read": true, 00:13:46.654 "write": true, 00:13:46.654 "unmap": true, 00:13:46.654 "flush": true, 00:13:46.654 "reset": true, 00:13:46.654 "nvme_admin": false, 00:13:46.654 "nvme_io": false, 00:13:46.654 "nvme_io_md": false, 00:13:46.654 "write_zeroes": true, 00:13:46.654 "zcopy": true, 00:13:46.654 "get_zone_info": false, 00:13:46.654 "zone_management": false, 00:13:46.654 "zone_append": false, 00:13:46.654 "compare": false, 00:13:46.654 "compare_and_write": false, 00:13:46.654 "abort": true, 00:13:46.654 "seek_hole": false, 00:13:46.654 "seek_data": false, 00:13:46.654 "copy": true, 00:13:46.654 "nvme_iov_md": false 00:13:46.654 }, 00:13:46.654 "memory_domains": [ 00:13:46.654 { 00:13:46.654 "dma_device_id": "system", 00:13:46.654 "dma_device_type": 1 00:13:46.654 }, 00:13:46.654 { 00:13:46.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.654 "dma_device_type": 2 00:13:46.654 } 00:13:46.654 ], 00:13:46.654 "driver_specific": {} 00:13:46.654 } 00:13:46.654 ] 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.654 20:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.654 "name": "Existed_Raid", 00:13:46.654 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:46.654 "strip_size_kb": 64, 00:13:46.654 "state": "configuring", 00:13:46.654 "raid_level": "raid0", 00:13:46.654 "superblock": true, 00:13:46.654 "num_base_bdevs": 4, 00:13:46.654 "num_base_bdevs_discovered": 3, 00:13:46.654 "num_base_bdevs_operational": 4, 00:13:46.654 "base_bdevs_list": [ 00:13:46.654 { 00:13:46.654 "name": "BaseBdev1", 00:13:46.654 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:46.654 "is_configured": true, 00:13:46.654 "data_offset": 2048, 00:13:46.654 "data_size": 63488 00:13:46.654 }, 00:13:46.654 { 
00:13:46.654 "name": null, 00:13:46.654 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:46.654 "is_configured": false, 00:13:46.654 "data_offset": 0, 00:13:46.654 "data_size": 63488 00:13:46.654 }, 00:13:46.654 { 00:13:46.654 "name": "BaseBdev3", 00:13:46.654 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:46.654 "is_configured": true, 00:13:46.654 "data_offset": 2048, 00:13:46.654 "data_size": 63488 00:13:46.654 }, 00:13:46.654 { 00:13:46.654 "name": "BaseBdev4", 00:13:46.654 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:46.654 "is_configured": true, 00:13:46.654 "data_offset": 2048, 00:13:46.654 "data_size": 63488 00:13:46.654 } 00:13:46.654 ] 00:13:46.654 }' 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.654 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.222 [2024-11-26 20:26:40.539424] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.222 20:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.222 "name": "Existed_Raid", 00:13:47.222 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:47.222 "strip_size_kb": 64, 00:13:47.222 "state": "configuring", 00:13:47.222 "raid_level": "raid0", 00:13:47.222 "superblock": true, 00:13:47.222 "num_base_bdevs": 4, 00:13:47.222 "num_base_bdevs_discovered": 2, 00:13:47.222 "num_base_bdevs_operational": 4, 00:13:47.222 "base_bdevs_list": [ 00:13:47.222 { 00:13:47.222 "name": "BaseBdev1", 00:13:47.222 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:47.222 "is_configured": true, 00:13:47.222 "data_offset": 2048, 00:13:47.222 "data_size": 63488 00:13:47.222 }, 00:13:47.222 { 00:13:47.222 "name": null, 00:13:47.222 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:47.222 "is_configured": false, 00:13:47.222 "data_offset": 0, 00:13:47.222 "data_size": 63488 00:13:47.222 }, 00:13:47.222 { 00:13:47.222 "name": null, 00:13:47.222 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:47.222 "is_configured": false, 00:13:47.222 "data_offset": 0, 00:13:47.222 "data_size": 63488 00:13:47.222 }, 00:13:47.222 { 00:13:47.222 "name": "BaseBdev4", 00:13:47.222 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:47.222 "is_configured": true, 00:13:47.222 "data_offset": 2048, 00:13:47.222 "data_size": 63488 00:13:47.222 } 00:13:47.222 ] 00:13:47.222 }' 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.222 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.483 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.483 20:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.483 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.483 
20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.483 20:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.483 [2024-11-26 20:26:41.014584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.483 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.742 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.742 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.742 "name": "Existed_Raid", 00:13:47.742 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:47.742 "strip_size_kb": 64, 00:13:47.742 "state": "configuring", 00:13:47.742 "raid_level": "raid0", 00:13:47.742 "superblock": true, 00:13:47.742 "num_base_bdevs": 4, 00:13:47.742 "num_base_bdevs_discovered": 3, 00:13:47.742 "num_base_bdevs_operational": 4, 00:13:47.742 "base_bdevs_list": [ 00:13:47.742 { 00:13:47.742 "name": "BaseBdev1", 00:13:47.742 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:47.742 "is_configured": true, 00:13:47.742 "data_offset": 2048, 00:13:47.742 "data_size": 63488 00:13:47.742 }, 00:13:47.742 { 00:13:47.742 "name": null, 00:13:47.742 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:47.742 "is_configured": false, 00:13:47.742 "data_offset": 0, 00:13:47.742 "data_size": 63488 00:13:47.742 }, 00:13:47.742 { 00:13:47.742 "name": "BaseBdev3", 00:13:47.742 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:47.742 "is_configured": true, 00:13:47.742 "data_offset": 2048, 00:13:47.742 "data_size": 63488 00:13:47.742 }, 00:13:47.742 { 00:13:47.742 "name": "BaseBdev4", 00:13:47.743 "uuid": 
"4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:47.743 "is_configured": true, 00:13:47.743 "data_offset": 2048, 00:13:47.743 "data_size": 63488 00:13:47.743 } 00:13:47.743 ] 00:13:47.743 }' 00:13:47.743 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.743 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.002 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.002 [2024-11-26 20:26:41.549740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.261 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.262 "name": "Existed_Raid", 00:13:48.262 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:48.262 "strip_size_kb": 64, 00:13:48.262 "state": "configuring", 00:13:48.262 "raid_level": "raid0", 00:13:48.262 "superblock": true, 00:13:48.262 "num_base_bdevs": 4, 00:13:48.262 "num_base_bdevs_discovered": 2, 00:13:48.262 "num_base_bdevs_operational": 4, 00:13:48.262 "base_bdevs_list": [ 00:13:48.262 { 00:13:48.262 "name": null, 00:13:48.262 
"uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:48.262 "is_configured": false, 00:13:48.262 "data_offset": 0, 00:13:48.262 "data_size": 63488 00:13:48.262 }, 00:13:48.262 { 00:13:48.262 "name": null, 00:13:48.262 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:48.262 "is_configured": false, 00:13:48.262 "data_offset": 0, 00:13:48.262 "data_size": 63488 00:13:48.262 }, 00:13:48.262 { 00:13:48.262 "name": "BaseBdev3", 00:13:48.262 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:48.262 "is_configured": true, 00:13:48.262 "data_offset": 2048, 00:13:48.262 "data_size": 63488 00:13:48.262 }, 00:13:48.262 { 00:13:48.262 "name": "BaseBdev4", 00:13:48.262 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:48.262 "is_configured": true, 00:13:48.262 "data_offset": 2048, 00:13:48.262 "data_size": 63488 00:13:48.262 } 00:13:48.262 ] 00:13:48.262 }' 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.262 20:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.829 [2024-11-26 20:26:42.147757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.829 20:26:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.829 "name": "Existed_Raid", 00:13:48.829 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:48.829 "strip_size_kb": 64, 00:13:48.829 "state": "configuring", 00:13:48.829 "raid_level": "raid0", 00:13:48.829 "superblock": true, 00:13:48.829 "num_base_bdevs": 4, 00:13:48.829 "num_base_bdevs_discovered": 3, 00:13:48.829 "num_base_bdevs_operational": 4, 00:13:48.829 "base_bdevs_list": [ 00:13:48.829 { 00:13:48.829 "name": null, 00:13:48.829 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:48.829 "is_configured": false, 00:13:48.829 "data_offset": 0, 00:13:48.829 "data_size": 63488 00:13:48.829 }, 00:13:48.829 { 00:13:48.829 "name": "BaseBdev2", 00:13:48.829 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:48.829 "is_configured": true, 00:13:48.829 "data_offset": 2048, 00:13:48.829 "data_size": 63488 00:13:48.829 }, 00:13:48.829 { 00:13:48.829 "name": "BaseBdev3", 00:13:48.829 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:48.829 "is_configured": true, 00:13:48.829 "data_offset": 2048, 00:13:48.829 "data_size": 63488 00:13:48.829 }, 00:13:48.829 { 00:13:48.829 "name": "BaseBdev4", 00:13:48.829 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:48.829 "is_configured": true, 00:13:48.829 "data_offset": 2048, 00:13:48.829 "data_size": 63488 00:13:48.829 } 00:13:48.829 ] 00:13:48.829 }' 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.829 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.088 20:26:42 
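The `verify_raid_bdev_state` calls traced above fetch `rpc_cmd bdev_raid_get_bdevs all`, select the `Existed_Raid` entry with jq, and compare its fields against the expected values. A self-contained sketch of that check, with `rpc_cmd` stubbed to canned output and sed standing in for jq (illustration only; the real helper lives in `bdev/bdev_raid.sh` and talks to the running SPDK app via `scripts/rpc.py`):

```shell
# Stubbed rpc_cmd: the real one invokes scripts/rpc.py against the SPDK target.
rpc_cmd() {
  echo '{"name":"Existed_Raid","state":"configuring","raid_level":"raid0"}'
}

# Sketch of verify_raid_bdev_state: fetch the raid bdev info and compare
# the reported state and raid level with what the test expects.
verify_raid_bdev_state() {
  local raid_bdev_name=$1 expected_state=$2 expected_level=$3
  local info state level
  info=$(rpc_cmd bdev_raid_get_bdevs all)
  # The real helper filters with jq; crude sed extraction keeps this standalone.
  state=$(sed -n 's/.*"state":"\([^"]*\)".*/\1/p' <<<"$info")
  level=$(sed -n 's/.*"raid_level":"\([^"]*\)".*/\1/p' <<<"$info")
  [[ $state == "$expected_state" && $level == "$expected_level" ]]
}

verify_raid_bdev_state Existed_Raid configuring raid0 && echo OK
```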
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.088 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 036e9f08-884d-48ee-8b7d-583df2072008 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.346 [2024-11-26 20:26:42.726140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:49.346 [2024-11-26 20:26:42.726498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:49.346 [2024-11-26 20:26:42.726553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:49.346 [2024-11-26 20:26:42.726902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:49.346 [2024-11-26 20:26:42.727104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:49.346 [2024-11-26 20:26:42.727150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:49.346 NewBaseBdev 00:13:49.346 [2024-11-26 20:26:42.727366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:49.346 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.347 20:26:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.347 [ 00:13:49.347 { 00:13:49.347 "name": "NewBaseBdev", 00:13:49.347 "aliases": [ 00:13:49.347 "036e9f08-884d-48ee-8b7d-583df2072008" 00:13:49.347 ], 00:13:49.347 "product_name": "Malloc disk", 00:13:49.347 "block_size": 512, 00:13:49.347 "num_blocks": 65536, 00:13:49.347 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:49.347 "assigned_rate_limits": { 00:13:49.347 "rw_ios_per_sec": 0, 00:13:49.347 "rw_mbytes_per_sec": 0, 00:13:49.347 "r_mbytes_per_sec": 0, 00:13:49.347 "w_mbytes_per_sec": 0 00:13:49.347 }, 00:13:49.347 "claimed": true, 00:13:49.347 "claim_type": "exclusive_write", 00:13:49.347 "zoned": false, 00:13:49.347 "supported_io_types": { 00:13:49.347 "read": true, 00:13:49.347 "write": true, 00:13:49.347 "unmap": true, 00:13:49.347 "flush": true, 00:13:49.347 "reset": true, 00:13:49.347 "nvme_admin": false, 00:13:49.347 "nvme_io": false, 00:13:49.347 "nvme_io_md": false, 00:13:49.347 "write_zeroes": true, 00:13:49.347 "zcopy": true, 00:13:49.347 "get_zone_info": false, 00:13:49.347 "zone_management": false, 00:13:49.347 "zone_append": false, 00:13:49.347 "compare": false, 00:13:49.347 "compare_and_write": false, 00:13:49.347 "abort": true, 00:13:49.347 "seek_hole": false, 00:13:49.347 "seek_data": false, 00:13:49.347 "copy": true, 00:13:49.347 "nvme_iov_md": false 00:13:49.347 }, 00:13:49.347 "memory_domains": [ 00:13:49.347 { 00:13:49.347 "dma_device_id": "system", 00:13:49.347 "dma_device_type": 1 00:13:49.347 }, 00:13:49.347 { 00:13:49.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.347 "dma_device_type": 2 00:13:49.347 } 00:13:49.347 ], 00:13:49.347 "driver_specific": {} 00:13:49.347 } 00:13:49.347 ] 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:49.347 20:26:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.347 "name": "Existed_Raid", 00:13:49.347 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:49.347 "strip_size_kb": 64, 00:13:49.347 
"state": "online", 00:13:49.347 "raid_level": "raid0", 00:13:49.347 "superblock": true, 00:13:49.347 "num_base_bdevs": 4, 00:13:49.347 "num_base_bdevs_discovered": 4, 00:13:49.347 "num_base_bdevs_operational": 4, 00:13:49.347 "base_bdevs_list": [ 00:13:49.347 { 00:13:49.347 "name": "NewBaseBdev", 00:13:49.347 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:49.347 "is_configured": true, 00:13:49.347 "data_offset": 2048, 00:13:49.347 "data_size": 63488 00:13:49.347 }, 00:13:49.347 { 00:13:49.347 "name": "BaseBdev2", 00:13:49.347 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:49.347 "is_configured": true, 00:13:49.347 "data_offset": 2048, 00:13:49.347 "data_size": 63488 00:13:49.347 }, 00:13:49.347 { 00:13:49.347 "name": "BaseBdev3", 00:13:49.347 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:49.347 "is_configured": true, 00:13:49.347 "data_offset": 2048, 00:13:49.347 "data_size": 63488 00:13:49.347 }, 00:13:49.347 { 00:13:49.347 "name": "BaseBdev4", 00:13:49.347 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:49.347 "is_configured": true, 00:13:49.347 "data_offset": 2048, 00:13:49.347 "data_size": 63488 00:13:49.347 } 00:13:49.347 ] 00:13:49.347 }' 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.347 20:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.914 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.914 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.914 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.914 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.915 
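The dumps above show the rule this test exercises: the raid bdev reports `"configuring"` while `num_base_bdevs_discovered` (2, then 3) trails `num_base_bdevs_operational` (4), and flips to `"online"` once `NewBaseBdev` completes the set. A minimal sketch of that expectation (illustrative; the actual state machine is in SPDK's `bdev_raid.c`):

```shell
# Expected raid bdev state as a function of discovered vs. operational
# base bdev counts, mirroring the transitions logged above.
expected_raid_state() {
  local discovered=$1 operational=$2
  if (( discovered == operational )); then
    echo online        # all slots configured, raid comes online
  else
    echo configuring   # unconfigured slots remain (null-named entries)
  fi
}

expected_raid_state 2 4   # configuring: two slots still unconfigured
expected_raid_state 4 4   # online: NewBaseBdev filled the last slot
```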
20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.915 [2024-11-26 20:26:43.217819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.915 "name": "Existed_Raid", 00:13:49.915 "aliases": [ 00:13:49.915 "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79" 00:13:49.915 ], 00:13:49.915 "product_name": "Raid Volume", 00:13:49.915 "block_size": 512, 00:13:49.915 "num_blocks": 253952, 00:13:49.915 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:49.915 "assigned_rate_limits": { 00:13:49.915 "rw_ios_per_sec": 0, 00:13:49.915 "rw_mbytes_per_sec": 0, 00:13:49.915 "r_mbytes_per_sec": 0, 00:13:49.915 "w_mbytes_per_sec": 0 00:13:49.915 }, 00:13:49.915 "claimed": false, 00:13:49.915 "zoned": false, 00:13:49.915 "supported_io_types": { 00:13:49.915 "read": true, 00:13:49.915 "write": true, 00:13:49.915 "unmap": true, 00:13:49.915 "flush": true, 00:13:49.915 "reset": true, 00:13:49.915 "nvme_admin": false, 00:13:49.915 "nvme_io": false, 00:13:49.915 "nvme_io_md": false, 00:13:49.915 "write_zeroes": true, 00:13:49.915 "zcopy": false, 00:13:49.915 "get_zone_info": false, 00:13:49.915 "zone_management": false, 00:13:49.915 "zone_append": false, 00:13:49.915 "compare": false, 00:13:49.915 "compare_and_write": false, 00:13:49.915 "abort": 
false, 00:13:49.915 "seek_hole": false, 00:13:49.915 "seek_data": false, 00:13:49.915 "copy": false, 00:13:49.915 "nvme_iov_md": false 00:13:49.915 }, 00:13:49.915 "memory_domains": [ 00:13:49.915 { 00:13:49.915 "dma_device_id": "system", 00:13:49.915 "dma_device_type": 1 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.915 "dma_device_type": 2 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "dma_device_id": "system", 00:13:49.915 "dma_device_type": 1 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.915 "dma_device_type": 2 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "dma_device_id": "system", 00:13:49.915 "dma_device_type": 1 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.915 "dma_device_type": 2 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "dma_device_id": "system", 00:13:49.915 "dma_device_type": 1 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.915 "dma_device_type": 2 00:13:49.915 } 00:13:49.915 ], 00:13:49.915 "driver_specific": { 00:13:49.915 "raid": { 00:13:49.915 "uuid": "cdf9d2ad-d577-45d1-93ac-efc02dbb8e79", 00:13:49.915 "strip_size_kb": 64, 00:13:49.915 "state": "online", 00:13:49.915 "raid_level": "raid0", 00:13:49.915 "superblock": true, 00:13:49.915 "num_base_bdevs": 4, 00:13:49.915 "num_base_bdevs_discovered": 4, 00:13:49.915 "num_base_bdevs_operational": 4, 00:13:49.915 "base_bdevs_list": [ 00:13:49.915 { 00:13:49.915 "name": "NewBaseBdev", 00:13:49.915 "uuid": "036e9f08-884d-48ee-8b7d-583df2072008", 00:13:49.915 "is_configured": true, 00:13:49.915 "data_offset": 2048, 00:13:49.915 "data_size": 63488 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "name": "BaseBdev2", 00:13:49.915 "uuid": "16347174-f6ee-4e33-96ee-687405f91854", 00:13:49.915 "is_configured": true, 00:13:49.915 "data_offset": 2048, 00:13:49.915 "data_size": 63488 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 
"name": "BaseBdev3", 00:13:49.915 "uuid": "8007f262-7e78-4e4f-9573-59e6ab3ab7a5", 00:13:49.915 "is_configured": true, 00:13:49.915 "data_offset": 2048, 00:13:49.915 "data_size": 63488 00:13:49.915 }, 00:13:49.915 { 00:13:49.915 "name": "BaseBdev4", 00:13:49.915 "uuid": "4fc90ff3-6e13-4471-8c55-2f5d3540a375", 00:13:49.915 "is_configured": true, 00:13:49.915 "data_offset": 2048, 00:13:49.915 "data_size": 63488 00:13:49.915 } 00:13:49.915 ] 00:13:49.915 } 00:13:49.915 } 00:13:49.915 }' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:49.915 BaseBdev2 00:13:49.915 BaseBdev3 00:13:49.915 BaseBdev4' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.915 20:26:43 
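`verify_raid_bdev_properties` above joins each bdev's `[.block_size, .md_size, .md_interleave, .dif_type]` tuple with jq and requires every configured base bdev to match the raid volume's tuple (here `512` with empty metadata fields, hence the trailing spaces in `'512 '`). A stubbed sketch of that comparison loop (illustration only; the real loop queries `bdev_get_bdevs -b $name` per base bdev):

```shell
# Tuple for the raid volume: block_size 512, md_size/md_interleave/dif_type
# empty, so jq's join(" ") yields "512" plus three trailing separators.
cmp_raid_bdev='512   '
for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
  # Stubbed per-bdev tuple; the real test derives it from bdev_get_bdevs.
  cmp_base_bdev='512   '
  [[ $cmp_raid_bdev == "$cmp_base_bdev" ]] || { echo "mismatch on $name"; exit 1; }
done
echo "all base bdevs match"
```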
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.915 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.175 [2024-11-26 20:26:43.528849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.175 [2024-11-26 20:26:43.528884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.175 [2024-11-26 20:26:43.528975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.175 [2024-11-26 20:26:43.529055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.175 [2024-11-26 20:26:43.529068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70366 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70366 ']' 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70366 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70366 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.175 killing process with pid 70366 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70366' 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70366 00:13:50.175 [2024-11-26 20:26:43.576866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.175 20:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70366 00:13:50.743 [2024-11-26 20:26:44.029372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.120 20:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:52.120 00:13:52.120 real 0m12.252s 00:13:52.120 user 0m19.455s 00:13:52.120 sys 0m2.102s 00:13:52.120 20:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.120 
************************************ 00:13:52.120 END TEST raid_state_function_test_sb 00:13:52.120 ************************************ 00:13:52.120 20:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.120 20:26:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:52.120 20:26:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:52.120 20:26:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.120 20:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.120 ************************************ 00:13:52.120 START TEST raid_superblock_test 00:13:52.120 ************************************ 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71044 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71044 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71044 ']' 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.120 20:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.120 [2024-11-26 20:26:45.419402] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
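raid_superblock_test's prologue above declares three parallel arrays (`base_bdevs_malloc`, `base_bdevs_pt`, `base_bdevs_pt_uuid`) and, as the later trace shows, fills them one entry per base bdev (`malloc1`/`pt1`/`...000001`, then `malloc2`/`pt2`/`...000002`, and so on). A standalone sketch of that naming loop (hypothetical reproduction; the real loop is the `(( i <= num_base_bdevs ))` iteration in `bdev/bdev_raid.sh`):

```shell
# Build the per-base-bdev name arrays the way the traced loop does:
# malloc$i backs a passthru bdev pt$i with a fixed-pattern UUID.
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()
num_base_bdevs=4
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs_malloc+=("malloc$i")
  base_bdevs_pt+=("pt$i")
  base_bdevs_pt_uuid+=("00000000-0000-0000-0000-00000000000$i")
done
echo "${base_bdevs_pt[*]}"
```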
00:13:52.120 [2024-11-26 20:26:45.419630] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71044 ] 00:13:52.120 [2024-11-26 20:26:45.595690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.379 [2024-11-26 20:26:45.719874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.639 [2024-11-26 20:26:45.942918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.639 [2024-11-26 20:26:45.943050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:52.900 
20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.900 malloc1 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.900 [2024-11-26 20:26:46.335134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:52.900 [2024-11-26 20:26:46.335199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.900 [2024-11-26 20:26:46.335223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.900 [2024-11-26 20:26:46.335233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.900 [2024-11-26 20:26:46.337562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.900 [2024-11-26 20:26:46.337602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:52.900 pt1 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.900 malloc2 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.900 [2024-11-26 20:26:46.391998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:52.900 [2024-11-26 20:26:46.392116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.900 [2024-11-26 20:26:46.392185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.900 [2024-11-26 20:26:46.392229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.900 [2024-11-26 20:26:46.394544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.900 [2024-11-26 20:26:46.394626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:52.900 
pt2 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.900 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.160 malloc3 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.160 [2024-11-26 20:26:46.465216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:53.160 [2024-11-26 20:26:46.465373] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.160 [2024-11-26 20:26:46.465439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:53.160 [2024-11-26 20:26:46.465482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.160 [2024-11-26 20:26:46.468011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.160 [2024-11-26 20:26:46.468133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:53.160 pt3 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.160 malloc4 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.160 [2024-11-26 20:26:46.526703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:53.160 [2024-11-26 20:26:46.526765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.160 [2024-11-26 20:26:46.526788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:53.160 [2024-11-26 20:26:46.526797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.160 [2024-11-26 20:26:46.529150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.160 [2024-11-26 20:26:46.529189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:53.160 pt4 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.160 [2024-11-26 20:26:46.538727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:53.160 [2024-11-26 
20:26:46.540648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:53.160 [2024-11-26 20:26:46.540830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:53.160 [2024-11-26 20:26:46.540905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:53.160 [2024-11-26 20:26:46.541146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:53.160 [2024-11-26 20:26:46.541162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:53.160 [2024-11-26 20:26:46.541518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:53.160 [2024-11-26 20:26:46.541728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:53.160 [2024-11-26 20:26:46.541744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:53.160 [2024-11-26 20:26:46.541932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.160 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.161 "name": "raid_bdev1", 00:13:53.161 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:53.161 "strip_size_kb": 64, 00:13:53.161 "state": "online", 00:13:53.161 "raid_level": "raid0", 00:13:53.161 "superblock": true, 00:13:53.161 "num_base_bdevs": 4, 00:13:53.161 "num_base_bdevs_discovered": 4, 00:13:53.161 "num_base_bdevs_operational": 4, 00:13:53.161 "base_bdevs_list": [ 00:13:53.161 { 00:13:53.161 "name": "pt1", 00:13:53.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.161 "is_configured": true, 00:13:53.161 "data_offset": 2048, 00:13:53.161 "data_size": 63488 00:13:53.161 }, 00:13:53.161 { 00:13:53.161 "name": "pt2", 00:13:53.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.161 "is_configured": true, 00:13:53.161 "data_offset": 2048, 00:13:53.161 "data_size": 63488 00:13:53.161 }, 00:13:53.161 { 00:13:53.161 "name": "pt3", 00:13:53.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.161 "is_configured": true, 00:13:53.161 "data_offset": 2048, 00:13:53.161 
"data_size": 63488 00:13:53.161 }, 00:13:53.161 { 00:13:53.161 "name": "pt4", 00:13:53.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:53.161 "is_configured": true, 00:13:53.161 "data_offset": 2048, 00:13:53.161 "data_size": 63488 00:13:53.161 } 00:13:53.161 ] 00:13:53.161 }' 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.161 20:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.730 [2024-11-26 20:26:47.050245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.730 "name": "raid_bdev1", 00:13:53.730 "aliases": [ 00:13:53.730 "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3" 
00:13:53.730 ], 00:13:53.730 "product_name": "Raid Volume", 00:13:53.730 "block_size": 512, 00:13:53.730 "num_blocks": 253952, 00:13:53.730 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:53.730 "assigned_rate_limits": { 00:13:53.730 "rw_ios_per_sec": 0, 00:13:53.730 "rw_mbytes_per_sec": 0, 00:13:53.730 "r_mbytes_per_sec": 0, 00:13:53.730 "w_mbytes_per_sec": 0 00:13:53.730 }, 00:13:53.730 "claimed": false, 00:13:53.730 "zoned": false, 00:13:53.730 "supported_io_types": { 00:13:53.730 "read": true, 00:13:53.730 "write": true, 00:13:53.730 "unmap": true, 00:13:53.730 "flush": true, 00:13:53.730 "reset": true, 00:13:53.730 "nvme_admin": false, 00:13:53.730 "nvme_io": false, 00:13:53.730 "nvme_io_md": false, 00:13:53.730 "write_zeroes": true, 00:13:53.730 "zcopy": false, 00:13:53.730 "get_zone_info": false, 00:13:53.730 "zone_management": false, 00:13:53.730 "zone_append": false, 00:13:53.730 "compare": false, 00:13:53.730 "compare_and_write": false, 00:13:53.730 "abort": false, 00:13:53.730 "seek_hole": false, 00:13:53.730 "seek_data": false, 00:13:53.730 "copy": false, 00:13:53.730 "nvme_iov_md": false 00:13:53.730 }, 00:13:53.730 "memory_domains": [ 00:13:53.730 { 00:13:53.730 "dma_device_id": "system", 00:13:53.730 "dma_device_type": 1 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.730 "dma_device_type": 2 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "dma_device_id": "system", 00:13:53.730 "dma_device_type": 1 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.730 "dma_device_type": 2 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "dma_device_id": "system", 00:13:53.730 "dma_device_type": 1 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.730 "dma_device_type": 2 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "dma_device_id": "system", 00:13:53.730 "dma_device_type": 1 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:53.730 "dma_device_type": 2 00:13:53.730 } 00:13:53.730 ], 00:13:53.730 "driver_specific": { 00:13:53.730 "raid": { 00:13:53.730 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:53.730 "strip_size_kb": 64, 00:13:53.730 "state": "online", 00:13:53.730 "raid_level": "raid0", 00:13:53.730 "superblock": true, 00:13:53.730 "num_base_bdevs": 4, 00:13:53.730 "num_base_bdevs_discovered": 4, 00:13:53.730 "num_base_bdevs_operational": 4, 00:13:53.730 "base_bdevs_list": [ 00:13:53.730 { 00:13:53.730 "name": "pt1", 00:13:53.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.730 "is_configured": true, 00:13:53.730 "data_offset": 2048, 00:13:53.730 "data_size": 63488 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "name": "pt2", 00:13:53.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.730 "is_configured": true, 00:13:53.730 "data_offset": 2048, 00:13:53.730 "data_size": 63488 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "name": "pt3", 00:13:53.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.730 "is_configured": true, 00:13:53.730 "data_offset": 2048, 00:13:53.730 "data_size": 63488 00:13:53.730 }, 00:13:53.730 { 00:13:53.730 "name": "pt4", 00:13:53.730 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:53.730 "is_configured": true, 00:13:53.730 "data_offset": 2048, 00:13:53.730 "data_size": 63488 00:13:53.730 } 00:13:53.730 ] 00:13:53.730 } 00:13:53.730 } 00:13:53.730 }' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:53.730 pt2 00:13:53.730 pt3 00:13:53.730 pt4' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.730 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.989 20:26:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.989 [2024-11-26 20:26:47.401727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=daa0a219-c4aa-4689-8fe2-2cce44c8f9c3 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z daa0a219-c4aa-4689-8fe2-2cce44c8f9c3 ']' 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.989 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.990 [2024-11-26 20:26:47.437295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.990 [2024-11-26 20:26:47.437379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.990 [2024-11-26 20:26:47.437506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.990 [2024-11-26 20:26:47.437618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.990 [2024-11-26 20:26:47.437678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.990 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.248 20:26:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.248 [2024-11-26 20:26:47.589052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:54.248 [2024-11-26 20:26:47.591277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:54.248 [2024-11-26 20:26:47.591396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:54.248 [2024-11-26 20:26:47.591520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:54.248 [2024-11-26 20:26:47.591590] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:54.248 [2024-11-26 20:26:47.591652] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:54.248 [2024-11-26 20:26:47.591678] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:54.248 [2024-11-26 20:26:47.591703] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:54.248 [2024-11-26 20:26:47.591721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.248 [2024-11-26 20:26:47.591738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:54.248 request: 00:13:54.248 { 00:13:54.248 "name": "raid_bdev1", 00:13:54.248 "raid_level": "raid0", 00:13:54.248 "base_bdevs": [ 00:13:54.248 "malloc1", 00:13:54.248 "malloc2", 00:13:54.248 "malloc3", 00:13:54.248 "malloc4" 00:13:54.248 ], 00:13:54.248 "strip_size_kb": 64, 00:13:54.248 "superblock": false, 00:13:54.248 "method": "bdev_raid_create", 00:13:54.248 "req_id": 1 00:13:54.248 } 00:13:54.248 Got JSON-RPC error response 00:13:54.248 response: 00:13:54.248 { 00:13:54.248 "code": -17, 00:13:54.248 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:54.248 } 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.248 [2024-11-26 20:26:47.652901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:54.248 [2024-11-26 20:26:47.653026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.248 [2024-11-26 20:26:47.653077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:54.248 [2024-11-26 20:26:47.653112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.248 [2024-11-26 20:26:47.655632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.248 [2024-11-26 20:26:47.655727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:54.248 [2024-11-26 20:26:47.655886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:54.248 [2024-11-26 20:26:47.656008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:54.248 pt1 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.248 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.248 "name": "raid_bdev1", 00:13:54.248 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:54.248 "strip_size_kb": 64, 00:13:54.248 "state": "configuring", 00:13:54.248 "raid_level": "raid0", 00:13:54.248 "superblock": true, 00:13:54.248 "num_base_bdevs": 4, 00:13:54.248 "num_base_bdevs_discovered": 1, 00:13:54.248 "num_base_bdevs_operational": 4, 00:13:54.248 "base_bdevs_list": [ 00:13:54.248 { 00:13:54.248 "name": "pt1", 00:13:54.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.248 "is_configured": true, 00:13:54.248 "data_offset": 2048, 00:13:54.248 "data_size": 63488 00:13:54.248 }, 00:13:54.248 { 00:13:54.248 "name": null, 00:13:54.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.248 "is_configured": false, 00:13:54.248 "data_offset": 2048, 00:13:54.249 "data_size": 63488 00:13:54.249 }, 00:13:54.249 { 00:13:54.249 "name": null, 00:13:54.249 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.249 "is_configured": false, 00:13:54.249 "data_offset": 2048, 00:13:54.249 "data_size": 63488 00:13:54.249 }, 00:13:54.249 { 00:13:54.249 "name": null, 00:13:54.249 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:54.249 "is_configured": false, 00:13:54.249 "data_offset": 2048, 00:13:54.249 "data_size": 63488 00:13:54.249 } 00:13:54.249 ] 00:13:54.249 }' 00:13:54.249 20:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.249 20:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 [2024-11-26 20:26:48.164069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.822 [2024-11-26 20:26:48.164154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.822 [2024-11-26 20:26:48.164179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:54.822 [2024-11-26 20:26:48.164192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.822 [2024-11-26 20:26:48.164719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.822 [2024-11-26 20:26:48.164751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:54.822 [2024-11-26 20:26:48.164851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:54.822 [2024-11-26 20:26:48.164880] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:54.822 pt2 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 [2024-11-26 20:26:48.176067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:54.822 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.823 20:26:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.823 "name": "raid_bdev1", 00:13:54.823 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:54.823 "strip_size_kb": 64, 00:13:54.823 "state": "configuring", 00:13:54.823 "raid_level": "raid0", 00:13:54.823 "superblock": true, 00:13:54.823 "num_base_bdevs": 4, 00:13:54.823 "num_base_bdevs_discovered": 1, 00:13:54.823 "num_base_bdevs_operational": 4, 00:13:54.823 "base_bdevs_list": [ 00:13:54.823 { 00:13:54.823 "name": "pt1", 00:13:54.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.823 "is_configured": true, 00:13:54.823 "data_offset": 2048, 00:13:54.823 "data_size": 63488 00:13:54.823 }, 00:13:54.823 { 00:13:54.823 "name": null, 00:13:54.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.823 "is_configured": false, 00:13:54.823 "data_offset": 0, 00:13:54.823 "data_size": 63488 00:13:54.823 }, 00:13:54.823 { 00:13:54.823 "name": null, 00:13:54.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.823 "is_configured": false, 00:13:54.823 "data_offset": 2048, 00:13:54.823 "data_size": 63488 00:13:54.823 }, 00:13:54.823 { 00:13:54.823 "name": null, 00:13:54.823 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:54.823 "is_configured": false, 00:13:54.823 "data_offset": 2048, 00:13:54.823 "data_size": 63488 00:13:54.823 } 00:13:54.823 ] 00:13:54.823 }' 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.823 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.400 [2024-11-26 20:26:48.651260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:55.400 [2024-11-26 20:26:48.651407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.400 [2024-11-26 20:26:48.651434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:55.400 [2024-11-26 20:26:48.651445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.400 [2024-11-26 20:26:48.651938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.400 [2024-11-26 20:26:48.651965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:55.400 [2024-11-26 20:26:48.652061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:55.400 [2024-11-26 20:26:48.652085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:55.400 pt2 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.400 [2024-11-26 20:26:48.663213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:55.400 [2024-11-26 20:26:48.663345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.400 [2024-11-26 20:26:48.663373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:55.400 [2024-11-26 20:26:48.663383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.400 [2024-11-26 20:26:48.663914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.400 [2024-11-26 20:26:48.663941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:55.400 [2024-11-26 20:26:48.664034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:55.400 [2024-11-26 20:26:48.664066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:55.400 pt3 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.400 [2024-11-26 20:26:48.675162] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:13:55.400 [2024-11-26 20:26:48.675220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.400 [2024-11-26 20:26:48.675254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:55.400 [2024-11-26 20:26:48.675264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.400 [2024-11-26 20:26:48.675792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.400 [2024-11-26 20:26:48.675824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:55.400 [2024-11-26 20:26:48.675909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:55.400 [2024-11-26 20:26:48.675936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:55.400 [2024-11-26 20:26:48.676109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:55.400 [2024-11-26 20:26:48.676123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:55.400 [2024-11-26 20:26:48.676396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:55.400 [2024-11-26 20:26:48.676566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:55.400 [2024-11-26 20:26:48.676594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:55.400 [2024-11-26 20:26:48.676741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.400 pt4 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:55.400 
20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:55.400 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.401 "name": "raid_bdev1", 00:13:55.401 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:55.401 "strip_size_kb": 64, 00:13:55.401 "state": "online", 00:13:55.401 "raid_level": "raid0", 00:13:55.401 "superblock": true, 00:13:55.401 
"num_base_bdevs": 4, 00:13:55.401 "num_base_bdevs_discovered": 4, 00:13:55.401 "num_base_bdevs_operational": 4, 00:13:55.401 "base_bdevs_list": [ 00:13:55.401 { 00:13:55.401 "name": "pt1", 00:13:55.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:55.401 "is_configured": true, 00:13:55.401 "data_offset": 2048, 00:13:55.401 "data_size": 63488 00:13:55.401 }, 00:13:55.401 { 00:13:55.401 "name": "pt2", 00:13:55.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.401 "is_configured": true, 00:13:55.401 "data_offset": 2048, 00:13:55.401 "data_size": 63488 00:13:55.401 }, 00:13:55.401 { 00:13:55.401 "name": "pt3", 00:13:55.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.401 "is_configured": true, 00:13:55.401 "data_offset": 2048, 00:13:55.401 "data_size": 63488 00:13:55.401 }, 00:13:55.401 { 00:13:55.401 "name": "pt4", 00:13:55.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:55.401 "is_configured": true, 00:13:55.401 "data_offset": 2048, 00:13:55.401 "data_size": 63488 00:13:55.401 } 00:13:55.401 ] 00:13:55.401 }' 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.401 20:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.661 [2024-11-26 20:26:49.102824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.661 "name": "raid_bdev1", 00:13:55.661 "aliases": [ 00:13:55.661 "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3" 00:13:55.661 ], 00:13:55.661 "product_name": "Raid Volume", 00:13:55.661 "block_size": 512, 00:13:55.661 "num_blocks": 253952, 00:13:55.661 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:55.661 "assigned_rate_limits": { 00:13:55.661 "rw_ios_per_sec": 0, 00:13:55.661 "rw_mbytes_per_sec": 0, 00:13:55.661 "r_mbytes_per_sec": 0, 00:13:55.661 "w_mbytes_per_sec": 0 00:13:55.661 }, 00:13:55.661 "claimed": false, 00:13:55.661 "zoned": false, 00:13:55.661 "supported_io_types": { 00:13:55.661 "read": true, 00:13:55.661 "write": true, 00:13:55.661 "unmap": true, 00:13:55.661 "flush": true, 00:13:55.661 "reset": true, 00:13:55.661 "nvme_admin": false, 00:13:55.661 "nvme_io": false, 00:13:55.661 "nvme_io_md": false, 00:13:55.661 "write_zeroes": true, 00:13:55.661 "zcopy": false, 00:13:55.661 "get_zone_info": false, 00:13:55.661 "zone_management": false, 00:13:55.661 "zone_append": false, 00:13:55.661 "compare": false, 00:13:55.661 "compare_and_write": false, 00:13:55.661 "abort": false, 00:13:55.661 "seek_hole": false, 00:13:55.661 "seek_data": false, 00:13:55.661 "copy": false, 00:13:55.661 "nvme_iov_md": false 00:13:55.661 }, 00:13:55.661 "memory_domains": [ 00:13:55.661 { 00:13:55.661 "dma_device_id": "system", 
00:13:55.661 "dma_device_type": 1 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.661 "dma_device_type": 2 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "dma_device_id": "system", 00:13:55.661 "dma_device_type": 1 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.661 "dma_device_type": 2 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "dma_device_id": "system", 00:13:55.661 "dma_device_type": 1 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.661 "dma_device_type": 2 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "dma_device_id": "system", 00:13:55.661 "dma_device_type": 1 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.661 "dma_device_type": 2 00:13:55.661 } 00:13:55.661 ], 00:13:55.661 "driver_specific": { 00:13:55.661 "raid": { 00:13:55.661 "uuid": "daa0a219-c4aa-4689-8fe2-2cce44c8f9c3", 00:13:55.661 "strip_size_kb": 64, 00:13:55.661 "state": "online", 00:13:55.661 "raid_level": "raid0", 00:13:55.661 "superblock": true, 00:13:55.661 "num_base_bdevs": 4, 00:13:55.661 "num_base_bdevs_discovered": 4, 00:13:55.661 "num_base_bdevs_operational": 4, 00:13:55.661 "base_bdevs_list": [ 00:13:55.661 { 00:13:55.661 "name": "pt1", 00:13:55.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:55.661 "is_configured": true, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "name": "pt2", 00:13:55.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.661 "is_configured": true, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "name": "pt3", 00:13:55.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.661 "is_configured": true, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "name": "pt4", 00:13:55.661 
"uuid": "00000000-0000-0000-0000-000000000004", 00:13:55.661 "is_configured": true, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 } 00:13:55.661 ] 00:13:55.661 } 00:13:55.661 } 00:13:55.661 }' 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:55.661 pt2 00:13:55.661 pt3 00:13:55.661 pt4' 00:13:55.661 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.919 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.919 [2024-11-26 20:26:49.466170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' daa0a219-c4aa-4689-8fe2-2cce44c8f9c3 '!=' daa0a219-c4aa-4689-8fe2-2cce44c8f9c3 ']' 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71044 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71044 ']' 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71044 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:56.178 20:26:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71044 00:13:56.178 killing process with pid 71044 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71044' 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71044 00:13:56.178 [2024-11-26 20:26:49.533788] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.178 [2024-11-26 20:26:49.533893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.178 20:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71044 00:13:56.178 [2024-11-26 20:26:49.533978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.178 [2024-11-26 20:26:49.533989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:56.744 [2024-11-26 20:26:50.012254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.116 20:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:58.116 00:13:58.116 real 0m6.037s 00:13:58.116 user 0m8.563s 00:13:58.116 sys 0m1.013s 00:13:58.116 20:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.116 20:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.116 ************************************ 00:13:58.116 END TEST raid_superblock_test 00:13:58.116 ************************************ 00:13:58.116 
20:26:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:58.116 20:26:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:58.116 20:26:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.116 20:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.116 ************************************ 00:13:58.116 START TEST raid_read_error_test 00:13:58.116 ************************************ 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:58.116 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Xmsz56ohJd 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71315 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:58.117 20:26:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71315 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71315 ']' 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.117 20:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.117 [2024-11-26 20:26:51.532897] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:13:58.117 [2024-11-26 20:26:51.533036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71315 ] 00:13:58.376 [2024-11-26 20:26:51.695723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.376 [2024-11-26 20:26:51.825040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.634 [2024-11-26 20:26:52.046722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.634 [2024-11-26 20:26:52.046787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.927 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.927 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:58.927 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:58.927 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:58.927 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.927 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.185 BaseBdev1_malloc 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.185 true 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.185 [2024-11-26 20:26:52.519012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:59.185 [2024-11-26 20:26:52.519078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.185 [2024-11-26 20:26:52.519103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:59.185 [2024-11-26 20:26:52.519116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.185 [2024-11-26 20:26:52.521598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.185 [2024-11-26 20:26:52.521647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.185 BaseBdev1 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.185 BaseBdev2_malloc 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.185 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.186 true 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.186 [2024-11-26 20:26:52.591503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:59.186 [2024-11-26 20:26:52.591568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.186 [2024-11-26 20:26:52.591588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:59.186 [2024-11-26 20:26:52.591602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.186 [2024-11-26 20:26:52.594036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.186 [2024-11-26 20:26:52.594133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.186 BaseBdev2 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.186 BaseBdev3_malloc 00:13:59.186 20:26:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.186 true 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.186 [2024-11-26 20:26:52.675772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:59.186 [2024-11-26 20:26:52.675832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.186 [2024-11-26 20:26:52.675853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:59.186 [2024-11-26 20:26:52.675865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.186 [2024-11-26 20:26:52.678290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.186 [2024-11-26 20:26:52.678333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:59.186 BaseBdev3 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.186 BaseBdev4_malloc 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.186 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.444 true 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.444 [2024-11-26 20:26:52.747078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:59.444 [2024-11-26 20:26:52.747164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.444 [2024-11-26 20:26:52.747198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:59.444 [2024-11-26 20:26:52.747217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.444 [2024-11-26 20:26:52.749787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.444 [2024-11-26 20:26:52.749839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:59.444 BaseBdev4 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.444 [2024-11-26 20:26:52.759091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.444 [2024-11-26 20:26:52.761237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.444 [2024-11-26 20:26:52.761341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.444 [2024-11-26 20:26:52.761414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:59.444 [2024-11-26 20:26:52.761672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:59.444 [2024-11-26 20:26:52.761691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:59.444 [2024-11-26 20:26:52.761981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:59.444 [2024-11-26 20:26:52.762166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:59.444 [2024-11-26 20:26:52.762184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:59.444 [2024-11-26 20:26:52.762405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:59.444 20:26:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.444 "name": "raid_bdev1", 00:13:59.444 "uuid": "fba1a72c-9463-4341-b583-a2a5154175a7", 00:13:59.444 "strip_size_kb": 64, 00:13:59.444 "state": "online", 00:13:59.444 "raid_level": "raid0", 00:13:59.444 "superblock": true, 00:13:59.444 "num_base_bdevs": 4, 00:13:59.444 "num_base_bdevs_discovered": 4, 00:13:59.444 "num_base_bdevs_operational": 4, 00:13:59.444 "base_bdevs_list": [ 00:13:59.444 
{ 00:13:59.444 "name": "BaseBdev1", 00:13:59.444 "uuid": "723c690b-2b2c-5ff2-976a-0d443ada6f97", 00:13:59.444 "is_configured": true, 00:13:59.444 "data_offset": 2048, 00:13:59.444 "data_size": 63488 00:13:59.444 }, 00:13:59.444 { 00:13:59.444 "name": "BaseBdev2", 00:13:59.444 "uuid": "37fd74a5-5e33-50ce-8c21-169e3deb8674", 00:13:59.444 "is_configured": true, 00:13:59.444 "data_offset": 2048, 00:13:59.444 "data_size": 63488 00:13:59.444 }, 00:13:59.444 { 00:13:59.444 "name": "BaseBdev3", 00:13:59.444 "uuid": "e6852d62-14ce-5a93-833d-06232ed5cb4a", 00:13:59.444 "is_configured": true, 00:13:59.444 "data_offset": 2048, 00:13:59.444 "data_size": 63488 00:13:59.444 }, 00:13:59.444 { 00:13:59.444 "name": "BaseBdev4", 00:13:59.444 "uuid": "b527599f-c715-5654-a1fd-720be4819eb2", 00:13:59.444 "is_configured": true, 00:13:59.444 "data_offset": 2048, 00:13:59.444 "data_size": 63488 00:13:59.444 } 00:13:59.444 ] 00:13:59.444 }' 00:13:59.444 20:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.445 20:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.703 20:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:59.703 20:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:59.961 [2024-11-26 20:26:53.343767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.899 20:26:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.899 20:26:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.899 "name": "raid_bdev1", 00:14:00.899 "uuid": "fba1a72c-9463-4341-b583-a2a5154175a7", 00:14:00.899 "strip_size_kb": 64, 00:14:00.899 "state": "online", 00:14:00.899 "raid_level": "raid0", 00:14:00.899 "superblock": true, 00:14:00.899 "num_base_bdevs": 4, 00:14:00.899 "num_base_bdevs_discovered": 4, 00:14:00.899 "num_base_bdevs_operational": 4, 00:14:00.899 "base_bdevs_list": [ 00:14:00.899 { 00:14:00.899 "name": "BaseBdev1", 00:14:00.899 "uuid": "723c690b-2b2c-5ff2-976a-0d443ada6f97", 00:14:00.899 "is_configured": true, 00:14:00.899 "data_offset": 2048, 00:14:00.899 "data_size": 63488 00:14:00.899 }, 00:14:00.899 { 00:14:00.899 "name": "BaseBdev2", 00:14:00.899 "uuid": "37fd74a5-5e33-50ce-8c21-169e3deb8674", 00:14:00.899 "is_configured": true, 00:14:00.899 "data_offset": 2048, 00:14:00.899 "data_size": 63488 00:14:00.899 }, 00:14:00.899 { 00:14:00.899 "name": "BaseBdev3", 00:14:00.899 "uuid": "e6852d62-14ce-5a93-833d-06232ed5cb4a", 00:14:00.899 "is_configured": true, 00:14:00.899 "data_offset": 2048, 00:14:00.899 "data_size": 63488 00:14:00.899 }, 00:14:00.899 { 00:14:00.899 "name": "BaseBdev4", 00:14:00.899 "uuid": "b527599f-c715-5654-a1fd-720be4819eb2", 00:14:00.899 "is_configured": true, 00:14:00.899 "data_offset": 2048, 00:14:00.899 "data_size": 63488 00:14:00.899 } 00:14:00.899 ] 00:14:00.899 }' 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.899 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.465 [2024-11-26 20:26:54.753539] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.465 [2024-11-26 20:26:54.753649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.465 [2024-11-26 20:26:54.756737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.465 [2024-11-26 20:26:54.756849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.465 [2024-11-26 20:26:54.756929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.465 [2024-11-26 20:26:54.756985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:01.465 { 00:14:01.465 "results": [ 00:14:01.465 { 00:14:01.465 "job": "raid_bdev1", 00:14:01.465 "core_mask": "0x1", 00:14:01.465 "workload": "randrw", 00:14:01.465 "percentage": 50, 00:14:01.465 "status": "finished", 00:14:01.465 "queue_depth": 1, 00:14:01.465 "io_size": 131072, 00:14:01.465 "runtime": 1.410243, 00:14:01.465 "iops": 14113.170567058301, 00:14:01.465 "mibps": 1764.1463208822877, 00:14:01.465 "io_failed": 1, 00:14:01.465 "io_timeout": 0, 00:14:01.465 "avg_latency_us": 98.28804111262444, 00:14:01.465 "min_latency_us": 27.388646288209607, 00:14:01.465 "max_latency_us": 1767.1825327510917 00:14:01.465 } 00:14:01.465 ], 00:14:01.465 "core_count": 1 00:14:01.465 } 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71315 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71315 ']' 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71315 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71315 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71315' 00:14:01.465 killing process with pid 71315 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71315 00:14:01.465 [2024-11-26 20:26:54.800957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.465 20:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71315 00:14:01.722 [2024-11-26 20:26:55.160470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Xmsz56ohJd 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:03.193 00:14:03.193 real 0m5.141s 00:14:03.193 user 0m6.121s 00:14:03.193 sys 0m0.626s 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:03.193 ************************************ 00:14:03.193 END TEST raid_read_error_test 00:14:03.193 ************************************ 00:14:03.193 20:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.193 20:26:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:03.193 20:26:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:03.193 20:26:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.193 20:26:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.193 ************************************ 00:14:03.193 START TEST raid_write_error_test 00:14:03.193 ************************************ 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WCVlkuHv4C 00:14:03.193 20:26:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71461 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71461 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71461 ']' 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.193 20:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.193 [2024-11-26 20:26:56.743742] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:14:03.193 [2024-11-26 20:26:56.743973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71461 ] 00:14:03.453 [2024-11-26 20:26:56.926071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.712 [2024-11-26 20:26:57.052993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.971 [2024-11-26 20:26:57.277245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.971 [2024-11-26 20:26:57.277326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.231 BaseBdev1_malloc 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.231 true 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.231 [2024-11-26 20:26:57.696500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:04.231 [2024-11-26 20:26:57.696642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.231 [2024-11-26 20:26:57.696692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:04.231 [2024-11-26 20:26:57.696736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.231 [2024-11-26 20:26:57.699170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.231 [2024-11-26 20:26:57.699284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:04.231 BaseBdev1 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.231 BaseBdev2_malloc 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.231 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:04.231 20:26:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.232 true 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.232 [2024-11-26 20:26:57.764999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:04.232 [2024-11-26 20:26:57.765062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.232 [2024-11-26 20:26:57.765083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:04.232 [2024-11-26 20:26:57.765095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.232 [2024-11-26 20:26:57.767525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.232 [2024-11-26 20:26:57.767632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:04.232 BaseBdev2 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.232 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:04.491 BaseBdev3_malloc 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.491 true 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.491 [2024-11-26 20:26:57.848914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:04.491 [2024-11-26 20:26:57.848991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.491 [2024-11-26 20:26:57.849037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:04.491 [2024-11-26 20:26:57.849050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.491 [2024-11-26 20:26:57.851542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.491 [2024-11-26 20:26:57.851584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:04.491 BaseBdev3 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.491 BaseBdev4_malloc 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.491 true 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.491 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.491 [2024-11-26 20:26:57.921285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:04.492 [2024-11-26 20:26:57.921348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.492 [2024-11-26 20:26:57.921388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:04.492 [2024-11-26 20:26:57.921400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.492 [2024-11-26 20:26:57.923903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.492 [2024-11-26 20:26:57.924005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:04.492 BaseBdev4 
00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.492 [2024-11-26 20:26:57.933359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.492 [2024-11-26 20:26:57.935532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.492 [2024-11-26 20:26:57.935615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.492 [2024-11-26 20:26:57.935684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.492 [2024-11-26 20:26:57.935975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:04.492 [2024-11-26 20:26:57.936000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:04.492 [2024-11-26 20:26:57.936316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:04.492 [2024-11-26 20:26:57.936525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:04.492 [2024-11-26 20:26:57.936543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:04.492 [2024-11-26 20:26:57.936761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.492 "name": "raid_bdev1", 00:14:04.492 "uuid": "8d1707e0-ec8e-4c13-a1c9-9ad08c8d2771", 00:14:04.492 "strip_size_kb": 64, 00:14:04.492 "state": "online", 00:14:04.492 "raid_level": "raid0", 00:14:04.492 "superblock": true, 00:14:04.492 "num_base_bdevs": 4, 00:14:04.492 "num_base_bdevs_discovered": 4, 00:14:04.492 
"num_base_bdevs_operational": 4, 00:14:04.492 "base_bdevs_list": [ 00:14:04.492 { 00:14:04.492 "name": "BaseBdev1", 00:14:04.492 "uuid": "eea098c3-c109-5b09-8fb6-3f46f8345897", 00:14:04.492 "is_configured": true, 00:14:04.492 "data_offset": 2048, 00:14:04.492 "data_size": 63488 00:14:04.492 }, 00:14:04.492 { 00:14:04.492 "name": "BaseBdev2", 00:14:04.492 "uuid": "99d31329-1fa3-5969-ad03-5e23d244ee92", 00:14:04.492 "is_configured": true, 00:14:04.492 "data_offset": 2048, 00:14:04.492 "data_size": 63488 00:14:04.492 }, 00:14:04.492 { 00:14:04.492 "name": "BaseBdev3", 00:14:04.492 "uuid": "54fd8005-92e1-53b9-943f-560fe5ccb2b8", 00:14:04.492 "is_configured": true, 00:14:04.492 "data_offset": 2048, 00:14:04.492 "data_size": 63488 00:14:04.492 }, 00:14:04.492 { 00:14:04.492 "name": "BaseBdev4", 00:14:04.492 "uuid": "def7ddfd-74a8-5133-8493-6e4f23c303bd", 00:14:04.492 "is_configured": true, 00:14:04.492 "data_offset": 2048, 00:14:04.492 "data_size": 63488 00:14:04.492 } 00:14:04.492 ] 00:14:04.492 }' 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.492 20:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.060 20:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:05.060 20:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:05.060 [2024-11-26 20:26:58.549943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.004 "name": "raid_bdev1", 00:14:06.004 "uuid": "8d1707e0-ec8e-4c13-a1c9-9ad08c8d2771", 00:14:06.004 "strip_size_kb": 64, 00:14:06.004 "state": "online", 00:14:06.004 "raid_level": "raid0", 00:14:06.004 "superblock": true, 00:14:06.004 "num_base_bdevs": 4, 00:14:06.004 "num_base_bdevs_discovered": 4, 00:14:06.004 "num_base_bdevs_operational": 4, 00:14:06.004 "base_bdevs_list": [ 00:14:06.004 { 00:14:06.004 "name": "BaseBdev1", 00:14:06.004 "uuid": "eea098c3-c109-5b09-8fb6-3f46f8345897", 00:14:06.004 "is_configured": true, 00:14:06.004 "data_offset": 2048, 00:14:06.004 "data_size": 63488 00:14:06.004 }, 00:14:06.004 { 00:14:06.004 "name": "BaseBdev2", 00:14:06.004 "uuid": "99d31329-1fa3-5969-ad03-5e23d244ee92", 00:14:06.004 "is_configured": true, 00:14:06.004 "data_offset": 2048, 00:14:06.004 "data_size": 63488 00:14:06.004 }, 00:14:06.004 { 00:14:06.004 "name": "BaseBdev3", 00:14:06.004 "uuid": "54fd8005-92e1-53b9-943f-560fe5ccb2b8", 00:14:06.004 "is_configured": true, 00:14:06.004 "data_offset": 2048, 00:14:06.004 "data_size": 63488 00:14:06.004 }, 00:14:06.004 { 00:14:06.004 "name": "BaseBdev4", 00:14:06.004 "uuid": "def7ddfd-74a8-5133-8493-6e4f23c303bd", 00:14:06.004 "is_configured": true, 00:14:06.004 "data_offset": 2048, 00:14:06.004 "data_size": 63488 00:14:06.004 } 00:14:06.004 ] 00:14:06.004 }' 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.004 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:06.609 [2024-11-26 20:26:59.855460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.609 [2024-11-26 20:26:59.855508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.609 [2024-11-26 20:26:59.858685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.609 [2024-11-26 20:26:59.858764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.609 [2024-11-26 20:26:59.858815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.609 [2024-11-26 20:26:59.858829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:06.609 { 00:14:06.609 "results": [ 00:14:06.609 { 00:14:06.609 "job": "raid_bdev1", 00:14:06.609 "core_mask": "0x1", 00:14:06.609 "workload": "randrw", 00:14:06.609 "percentage": 50, 00:14:06.609 "status": "finished", 00:14:06.609 "queue_depth": 1, 00:14:06.609 "io_size": 131072, 00:14:06.609 "runtime": 1.30582, 00:14:06.609 "iops": 13099.048873504771, 00:14:06.609 "mibps": 1637.3811091880964, 00:14:06.609 "io_failed": 1, 00:14:06.609 "io_timeout": 0, 00:14:06.609 "avg_latency_us": 106.09197804391523, 00:14:06.609 "min_latency_us": 31.972052401746726, 00:14:06.609 "max_latency_us": 1781.4917030567685 00:14:06.609 } 00:14:06.609 ], 00:14:06.609 "core_count": 1 00:14:06.609 } 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71461 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71461 ']' 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71461 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71461 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.609 killing process with pid 71461 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71461' 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71461 00:14:06.609 [2024-11-26 20:26:59.905343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.609 20:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71461 00:14:06.868 [2024-11-26 20:27:00.294643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WCVlkuHv4C 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:14:08.246 00:14:08.246 real 0m5.092s 00:14:08.246 user 0m6.027s 00:14:08.246 sys 0m0.604s 00:14:08.246 
20:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.246 20:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.246 ************************************ 00:14:08.246 END TEST raid_write_error_test 00:14:08.246 ************************************ 00:14:08.246 20:27:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:08.246 20:27:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:08.246 20:27:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:08.246 20:27:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.246 20:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.246 ************************************ 00:14:08.246 START TEST raid_state_function_test 00:14:08.246 ************************************ 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.246 20:27:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:08.246 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:08.506 20:27:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71610 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71610' 00:14:08.506 Process raid pid: 71610 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71610 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71610 ']' 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.506 20:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.506 [2024-11-26 20:27:01.886854] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:14:08.506 [2024-11-26 20:27:01.887001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.765 [2024-11-26 20:27:02.067839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.765 [2024-11-26 20:27:02.202996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.025 [2024-11-26 20:27:02.438415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.025 [2024-11-26 20:27:02.438462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.593 [2024-11-26 20:27:02.844558] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.593 [2024-11-26 20:27:02.844636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.593 [2024-11-26 20:27:02.844649] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.593 [2024-11-26 20:27:02.844661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.593 [2024-11-26 20:27:02.844669] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:09.593 [2024-11-26 20:27:02.844679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.593 [2024-11-26 20:27:02.844686] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:09.593 [2024-11-26 20:27:02.844696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.593 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.594 "name": "Existed_Raid", 00:14:09.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.594 "strip_size_kb": 64, 00:14:09.594 "state": "configuring", 00:14:09.594 "raid_level": "concat", 00:14:09.594 "superblock": false, 00:14:09.594 "num_base_bdevs": 4, 00:14:09.594 "num_base_bdevs_discovered": 0, 00:14:09.594 "num_base_bdevs_operational": 4, 00:14:09.594 "base_bdevs_list": [ 00:14:09.594 { 00:14:09.594 "name": "BaseBdev1", 00:14:09.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.594 "is_configured": false, 00:14:09.594 "data_offset": 0, 00:14:09.594 "data_size": 0 00:14:09.594 }, 00:14:09.594 { 00:14:09.594 "name": "BaseBdev2", 00:14:09.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.594 "is_configured": false, 00:14:09.594 "data_offset": 0, 00:14:09.594 "data_size": 0 00:14:09.594 }, 00:14:09.594 { 00:14:09.594 "name": "BaseBdev3", 00:14:09.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.594 "is_configured": false, 00:14:09.594 "data_offset": 0, 00:14:09.594 "data_size": 0 00:14:09.594 }, 00:14:09.594 { 00:14:09.594 "name": "BaseBdev4", 00:14:09.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.594 "is_configured": false, 00:14:09.594 "data_offset": 0, 00:14:09.594 "data_size": 0 00:14:09.594 } 00:14:09.594 ] 00:14:09.594 }' 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.594 20:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 [2024-11-26 20:27:03.307790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.853 [2024-11-26 20:27:03.307849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 [2024-11-26 20:27:03.319796] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.853 [2024-11-26 20:27:03.319863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.853 [2024-11-26 20:27:03.319875] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.853 [2024-11-26 20:27:03.319886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.853 [2024-11-26 20:27:03.319894] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.853 [2024-11-26 20:27:03.319904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.853 [2024-11-26 20:27:03.319912] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:09.853 [2024-11-26 20:27:03.319922] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 [2024-11-26 20:27:03.378973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.853 BaseBdev1 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.853 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.853 [ 00:14:09.853 { 00:14:09.853 "name": "BaseBdev1", 00:14:09.853 "aliases": [ 00:14:09.853 "a564edcf-82b1-45f4-be2c-8ebe0140f465" 00:14:09.853 ], 00:14:09.853 "product_name": "Malloc disk", 00:14:09.853 "block_size": 512, 00:14:09.854 "num_blocks": 65536, 00:14:09.854 "uuid": "a564edcf-82b1-45f4-be2c-8ebe0140f465", 00:14:09.854 "assigned_rate_limits": { 00:14:09.854 "rw_ios_per_sec": 0, 00:14:09.854 "rw_mbytes_per_sec": 0, 00:14:10.113 "r_mbytes_per_sec": 0, 00:14:10.113 "w_mbytes_per_sec": 0 00:14:10.113 }, 00:14:10.113 "claimed": true, 00:14:10.113 "claim_type": "exclusive_write", 00:14:10.113 "zoned": false, 00:14:10.113 "supported_io_types": { 00:14:10.113 "read": true, 00:14:10.113 "write": true, 00:14:10.113 "unmap": true, 00:14:10.113 "flush": true, 00:14:10.113 "reset": true, 00:14:10.113 "nvme_admin": false, 00:14:10.113 "nvme_io": false, 00:14:10.113 "nvme_io_md": false, 00:14:10.113 "write_zeroes": true, 00:14:10.113 "zcopy": true, 00:14:10.113 "get_zone_info": false, 00:14:10.113 "zone_management": false, 00:14:10.113 "zone_append": false, 00:14:10.113 "compare": false, 00:14:10.113 "compare_and_write": false, 00:14:10.113 "abort": true, 00:14:10.113 "seek_hole": false, 00:14:10.113 "seek_data": false, 00:14:10.113 "copy": true, 00:14:10.113 "nvme_iov_md": false 00:14:10.113 }, 00:14:10.113 "memory_domains": [ 00:14:10.113 { 00:14:10.113 "dma_device_id": "system", 00:14:10.113 "dma_device_type": 1 00:14:10.113 }, 00:14:10.113 { 00:14:10.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.113 "dma_device_type": 2 00:14:10.113 } 00:14:10.113 ], 00:14:10.113 "driver_specific": {} 00:14:10.113 } 00:14:10.113 ] 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.113 "name": "Existed_Raid", 
00:14:10.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.113 "strip_size_kb": 64, 00:14:10.113 "state": "configuring", 00:14:10.113 "raid_level": "concat", 00:14:10.113 "superblock": false, 00:14:10.113 "num_base_bdevs": 4, 00:14:10.113 "num_base_bdevs_discovered": 1, 00:14:10.113 "num_base_bdevs_operational": 4, 00:14:10.113 "base_bdevs_list": [ 00:14:10.113 { 00:14:10.113 "name": "BaseBdev1", 00:14:10.113 "uuid": "a564edcf-82b1-45f4-be2c-8ebe0140f465", 00:14:10.113 "is_configured": true, 00:14:10.113 "data_offset": 0, 00:14:10.113 "data_size": 65536 00:14:10.113 }, 00:14:10.113 { 00:14:10.113 "name": "BaseBdev2", 00:14:10.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.113 "is_configured": false, 00:14:10.113 "data_offset": 0, 00:14:10.113 "data_size": 0 00:14:10.113 }, 00:14:10.113 { 00:14:10.113 "name": "BaseBdev3", 00:14:10.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.113 "is_configured": false, 00:14:10.113 "data_offset": 0, 00:14:10.113 "data_size": 0 00:14:10.113 }, 00:14:10.113 { 00:14:10.113 "name": "BaseBdev4", 00:14:10.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.113 "is_configured": false, 00:14:10.113 "data_offset": 0, 00:14:10.113 "data_size": 0 00:14:10.113 } 00:14:10.113 ] 00:14:10.113 }' 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.113 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.374 [2024-11-26 20:27:03.850331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.374 [2024-11-26 20:27:03.850506] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.374 [2024-11-26 20:27:03.862405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.374 [2024-11-26 20:27:03.864706] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.374 [2024-11-26 20:27:03.864851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.374 [2024-11-26 20:27:03.864894] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.374 [2024-11-26 20:27:03.864939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.374 [2024-11-26 20:27:03.864978] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:10.374 [2024-11-26 20:27:03.865005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.374 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.375 "name": "Existed_Raid", 00:14:10.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.375 "strip_size_kb": 64, 00:14:10.375 "state": "configuring", 00:14:10.375 "raid_level": "concat", 00:14:10.375 "superblock": false, 00:14:10.375 "num_base_bdevs": 4, 00:14:10.375 
"num_base_bdevs_discovered": 1, 00:14:10.375 "num_base_bdevs_operational": 4, 00:14:10.375 "base_bdevs_list": [ 00:14:10.375 { 00:14:10.375 "name": "BaseBdev1", 00:14:10.375 "uuid": "a564edcf-82b1-45f4-be2c-8ebe0140f465", 00:14:10.375 "is_configured": true, 00:14:10.375 "data_offset": 0, 00:14:10.375 "data_size": 65536 00:14:10.375 }, 00:14:10.375 { 00:14:10.375 "name": "BaseBdev2", 00:14:10.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.375 "is_configured": false, 00:14:10.375 "data_offset": 0, 00:14:10.375 "data_size": 0 00:14:10.375 }, 00:14:10.375 { 00:14:10.375 "name": "BaseBdev3", 00:14:10.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.375 "is_configured": false, 00:14:10.375 "data_offset": 0, 00:14:10.375 "data_size": 0 00:14:10.375 }, 00:14:10.375 { 00:14:10.375 "name": "BaseBdev4", 00:14:10.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.375 "is_configured": false, 00:14:10.375 "data_offset": 0, 00:14:10.375 "data_size": 0 00:14:10.375 } 00:14:10.375 ] 00:14:10.375 }' 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.375 20:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.959 [2024-11-26 20:27:04.375040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.959 BaseBdev2 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:10.959 20:27:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.959 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.959 [ 00:14:10.959 { 00:14:10.959 "name": "BaseBdev2", 00:14:10.959 "aliases": [ 00:14:10.959 "4dde4f50-72e3-408b-bab8-163d85ad7d58" 00:14:10.959 ], 00:14:10.959 "product_name": "Malloc disk", 00:14:10.959 "block_size": 512, 00:14:10.959 "num_blocks": 65536, 00:14:10.959 "uuid": "4dde4f50-72e3-408b-bab8-163d85ad7d58", 00:14:10.959 "assigned_rate_limits": { 00:14:10.959 "rw_ios_per_sec": 0, 00:14:10.959 "rw_mbytes_per_sec": 0, 00:14:10.959 "r_mbytes_per_sec": 0, 00:14:10.959 "w_mbytes_per_sec": 0 00:14:10.959 }, 00:14:10.959 "claimed": true, 00:14:10.959 "claim_type": "exclusive_write", 00:14:10.959 "zoned": false, 00:14:10.959 "supported_io_types": { 
00:14:10.959 "read": true, 00:14:10.959 "write": true, 00:14:10.959 "unmap": true, 00:14:10.959 "flush": true, 00:14:10.959 "reset": true, 00:14:10.959 "nvme_admin": false, 00:14:10.959 "nvme_io": false, 00:14:10.959 "nvme_io_md": false, 00:14:10.959 "write_zeroes": true, 00:14:10.959 "zcopy": true, 00:14:10.959 "get_zone_info": false, 00:14:10.959 "zone_management": false, 00:14:10.959 "zone_append": false, 00:14:10.959 "compare": false, 00:14:10.959 "compare_and_write": false, 00:14:10.960 "abort": true, 00:14:10.960 "seek_hole": false, 00:14:10.960 "seek_data": false, 00:14:10.960 "copy": true, 00:14:10.960 "nvme_iov_md": false 00:14:10.960 }, 00:14:10.960 "memory_domains": [ 00:14:10.960 { 00:14:10.960 "dma_device_id": "system", 00:14:10.960 "dma_device_type": 1 00:14:10.960 }, 00:14:10.960 { 00:14:10.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.960 "dma_device_type": 2 00:14:10.960 } 00:14:10.960 ], 00:14:10.960 "driver_specific": {} 00:14:10.960 } 00:14:10.960 ] 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.960 "name": "Existed_Raid", 00:14:10.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.960 "strip_size_kb": 64, 00:14:10.960 "state": "configuring", 00:14:10.960 "raid_level": "concat", 00:14:10.960 "superblock": false, 00:14:10.960 "num_base_bdevs": 4, 00:14:10.960 "num_base_bdevs_discovered": 2, 00:14:10.960 "num_base_bdevs_operational": 4, 00:14:10.960 "base_bdevs_list": [ 00:14:10.960 { 00:14:10.960 "name": "BaseBdev1", 00:14:10.960 "uuid": "a564edcf-82b1-45f4-be2c-8ebe0140f465", 00:14:10.960 "is_configured": true, 00:14:10.960 "data_offset": 0, 00:14:10.960 "data_size": 65536 00:14:10.960 }, 00:14:10.960 { 00:14:10.960 "name": "BaseBdev2", 00:14:10.960 "uuid": "4dde4f50-72e3-408b-bab8-163d85ad7d58", 00:14:10.960 
"is_configured": true, 00:14:10.960 "data_offset": 0, 00:14:10.960 "data_size": 65536 00:14:10.960 }, 00:14:10.960 { 00:14:10.960 "name": "BaseBdev3", 00:14:10.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.960 "is_configured": false, 00:14:10.960 "data_offset": 0, 00:14:10.960 "data_size": 0 00:14:10.960 }, 00:14:10.960 { 00:14:10.960 "name": "BaseBdev4", 00:14:10.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.960 "is_configured": false, 00:14:10.960 "data_offset": 0, 00:14:10.960 "data_size": 0 00:14:10.960 } 00:14:10.960 ] 00:14:10.960 }' 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.960 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.528 [2024-11-26 20:27:04.919467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.528 BaseBdev3 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.528 [ 00:14:11.528 { 00:14:11.528 "name": "BaseBdev3", 00:14:11.528 "aliases": [ 00:14:11.528 "0790779b-5591-4348-97dd-e048128ff12b" 00:14:11.528 ], 00:14:11.528 "product_name": "Malloc disk", 00:14:11.528 "block_size": 512, 00:14:11.528 "num_blocks": 65536, 00:14:11.528 "uuid": "0790779b-5591-4348-97dd-e048128ff12b", 00:14:11.528 "assigned_rate_limits": { 00:14:11.528 "rw_ios_per_sec": 0, 00:14:11.528 "rw_mbytes_per_sec": 0, 00:14:11.528 "r_mbytes_per_sec": 0, 00:14:11.528 "w_mbytes_per_sec": 0 00:14:11.528 }, 00:14:11.528 "claimed": true, 00:14:11.528 "claim_type": "exclusive_write", 00:14:11.528 "zoned": false, 00:14:11.528 "supported_io_types": { 00:14:11.528 "read": true, 00:14:11.528 "write": true, 00:14:11.528 "unmap": true, 00:14:11.528 "flush": true, 00:14:11.528 "reset": true, 00:14:11.528 "nvme_admin": false, 00:14:11.528 "nvme_io": false, 00:14:11.528 "nvme_io_md": false, 00:14:11.528 "write_zeroes": true, 00:14:11.528 "zcopy": true, 00:14:11.528 "get_zone_info": false, 00:14:11.528 "zone_management": false, 00:14:11.528 "zone_append": false, 00:14:11.528 "compare": false, 00:14:11.528 "compare_and_write": false, 
00:14:11.528 "abort": true, 00:14:11.528 "seek_hole": false, 00:14:11.528 "seek_data": false, 00:14:11.528 "copy": true, 00:14:11.528 "nvme_iov_md": false 00:14:11.528 }, 00:14:11.528 "memory_domains": [ 00:14:11.528 { 00:14:11.528 "dma_device_id": "system", 00:14:11.528 "dma_device_type": 1 00:14:11.528 }, 00:14:11.528 { 00:14:11.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.528 "dma_device_type": 2 00:14:11.528 } 00:14:11.528 ], 00:14:11.528 "driver_specific": {} 00:14:11.528 } 00:14:11.528 ] 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.528 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.529 20:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.529 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.529 "name": "Existed_Raid", 00:14:11.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.529 "strip_size_kb": 64, 00:14:11.529 "state": "configuring", 00:14:11.529 "raid_level": "concat", 00:14:11.529 "superblock": false, 00:14:11.529 "num_base_bdevs": 4, 00:14:11.529 "num_base_bdevs_discovered": 3, 00:14:11.529 "num_base_bdevs_operational": 4, 00:14:11.529 "base_bdevs_list": [ 00:14:11.529 { 00:14:11.529 "name": "BaseBdev1", 00:14:11.529 "uuid": "a564edcf-82b1-45f4-be2c-8ebe0140f465", 00:14:11.529 "is_configured": true, 00:14:11.529 "data_offset": 0, 00:14:11.529 "data_size": 65536 00:14:11.529 }, 00:14:11.529 { 00:14:11.529 "name": "BaseBdev2", 00:14:11.529 "uuid": "4dde4f50-72e3-408b-bab8-163d85ad7d58", 00:14:11.529 "is_configured": true, 00:14:11.529 "data_offset": 0, 00:14:11.529 "data_size": 65536 00:14:11.529 }, 00:14:11.529 { 00:14:11.529 "name": "BaseBdev3", 00:14:11.529 "uuid": "0790779b-5591-4348-97dd-e048128ff12b", 00:14:11.529 "is_configured": true, 00:14:11.529 "data_offset": 0, 00:14:11.529 "data_size": 65536 00:14:11.529 }, 00:14:11.529 { 00:14:11.529 "name": "BaseBdev4", 00:14:11.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.529 "is_configured": false, 
00:14:11.529 "data_offset": 0, 00:14:11.529 "data_size": 0 00:14:11.529 } 00:14:11.529 ] 00:14:11.529 }' 00:14:11.529 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.529 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.097 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:12.097 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.097 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.097 [2024-11-26 20:27:05.417346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.097 [2024-11-26 20:27:05.417411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:12.097 [2024-11-26 20:27:05.417421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:12.097 [2024-11-26 20:27:05.417749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:12.098 [2024-11-26 20:27:05.417940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:12.098 [2024-11-26 20:27:05.417956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:12.098 [2024-11-26 20:27:05.418283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.098 BaseBdev4 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.098 [ 00:14:12.098 { 00:14:12.098 "name": "BaseBdev4", 00:14:12.098 "aliases": [ 00:14:12.098 "d80334e5-7928-4f99-a21b-ffbbd3190cc7" 00:14:12.098 ], 00:14:12.098 "product_name": "Malloc disk", 00:14:12.098 "block_size": 512, 00:14:12.098 "num_blocks": 65536, 00:14:12.098 "uuid": "d80334e5-7928-4f99-a21b-ffbbd3190cc7", 00:14:12.098 "assigned_rate_limits": { 00:14:12.098 "rw_ios_per_sec": 0, 00:14:12.098 "rw_mbytes_per_sec": 0, 00:14:12.098 "r_mbytes_per_sec": 0, 00:14:12.098 "w_mbytes_per_sec": 0 00:14:12.098 }, 00:14:12.098 "claimed": true, 00:14:12.098 "claim_type": "exclusive_write", 00:14:12.098 "zoned": false, 00:14:12.098 "supported_io_types": { 00:14:12.098 "read": true, 00:14:12.098 "write": true, 00:14:12.098 "unmap": true, 00:14:12.098 "flush": true, 00:14:12.098 "reset": true, 00:14:12.098 
"nvme_admin": false, 00:14:12.098 "nvme_io": false, 00:14:12.098 "nvme_io_md": false, 00:14:12.098 "write_zeroes": true, 00:14:12.098 "zcopy": true, 00:14:12.098 "get_zone_info": false, 00:14:12.098 "zone_management": false, 00:14:12.098 "zone_append": false, 00:14:12.098 "compare": false, 00:14:12.098 "compare_and_write": false, 00:14:12.098 "abort": true, 00:14:12.098 "seek_hole": false, 00:14:12.098 "seek_data": false, 00:14:12.098 "copy": true, 00:14:12.098 "nvme_iov_md": false 00:14:12.098 }, 00:14:12.098 "memory_domains": [ 00:14:12.098 { 00:14:12.098 "dma_device_id": "system", 00:14:12.098 "dma_device_type": 1 00:14:12.098 }, 00:14:12.098 { 00:14:12.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.098 "dma_device_type": 2 00:14:12.098 } 00:14:12.098 ], 00:14:12.098 "driver_specific": {} 00:14:12.098 } 00:14:12.098 ] 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.098 
20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.098 "name": "Existed_Raid", 00:14:12.098 "uuid": "6651d029-2c1a-43d0-91a8-4363606d5603", 00:14:12.098 "strip_size_kb": 64, 00:14:12.098 "state": "online", 00:14:12.098 "raid_level": "concat", 00:14:12.098 "superblock": false, 00:14:12.098 "num_base_bdevs": 4, 00:14:12.098 "num_base_bdevs_discovered": 4, 00:14:12.098 "num_base_bdevs_operational": 4, 00:14:12.098 "base_bdevs_list": [ 00:14:12.098 { 00:14:12.098 "name": "BaseBdev1", 00:14:12.098 "uuid": "a564edcf-82b1-45f4-be2c-8ebe0140f465", 00:14:12.098 "is_configured": true, 00:14:12.098 "data_offset": 0, 00:14:12.098 "data_size": 65536 00:14:12.098 }, 00:14:12.098 { 00:14:12.098 "name": "BaseBdev2", 00:14:12.098 "uuid": "4dde4f50-72e3-408b-bab8-163d85ad7d58", 00:14:12.098 "is_configured": true, 00:14:12.098 "data_offset": 0, 00:14:12.098 "data_size": 65536 00:14:12.098 }, 00:14:12.098 { 00:14:12.098 "name": "BaseBdev3", 
00:14:12.098 "uuid": "0790779b-5591-4348-97dd-e048128ff12b", 00:14:12.098 "is_configured": true, 00:14:12.098 "data_offset": 0, 00:14:12.098 "data_size": 65536 00:14:12.098 }, 00:14:12.098 { 00:14:12.098 "name": "BaseBdev4", 00:14:12.098 "uuid": "d80334e5-7928-4f99-a21b-ffbbd3190cc7", 00:14:12.098 "is_configured": true, 00:14:12.098 "data_offset": 0, 00:14:12.098 "data_size": 65536 00:14:12.098 } 00:14:12.098 ] 00:14:12.098 }' 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.098 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.669 [2024-11-26 20:27:05.960961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.669 20:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.669 
20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.669 "name": "Existed_Raid", 00:14:12.669 "aliases": [ 00:14:12.669 "6651d029-2c1a-43d0-91a8-4363606d5603" 00:14:12.669 ], 00:14:12.669 "product_name": "Raid Volume", 00:14:12.669 "block_size": 512, 00:14:12.669 "num_blocks": 262144, 00:14:12.669 "uuid": "6651d029-2c1a-43d0-91a8-4363606d5603", 00:14:12.670 "assigned_rate_limits": { 00:14:12.670 "rw_ios_per_sec": 0, 00:14:12.670 "rw_mbytes_per_sec": 0, 00:14:12.670 "r_mbytes_per_sec": 0, 00:14:12.670 "w_mbytes_per_sec": 0 00:14:12.670 }, 00:14:12.670 "claimed": false, 00:14:12.670 "zoned": false, 00:14:12.670 "supported_io_types": { 00:14:12.670 "read": true, 00:14:12.670 "write": true, 00:14:12.670 "unmap": true, 00:14:12.670 "flush": true, 00:14:12.670 "reset": true, 00:14:12.670 "nvme_admin": false, 00:14:12.670 "nvme_io": false, 00:14:12.670 "nvme_io_md": false, 00:14:12.670 "write_zeroes": true, 00:14:12.670 "zcopy": false, 00:14:12.670 "get_zone_info": false, 00:14:12.670 "zone_management": false, 00:14:12.670 "zone_append": false, 00:14:12.670 "compare": false, 00:14:12.670 "compare_and_write": false, 00:14:12.670 "abort": false, 00:14:12.670 "seek_hole": false, 00:14:12.670 "seek_data": false, 00:14:12.670 "copy": false, 00:14:12.670 "nvme_iov_md": false 00:14:12.670 }, 00:14:12.670 "memory_domains": [ 00:14:12.670 { 00:14:12.670 "dma_device_id": "system", 00:14:12.670 "dma_device_type": 1 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.670 "dma_device_type": 2 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "dma_device_id": "system", 00:14:12.670 "dma_device_type": 1 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.670 "dma_device_type": 2 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "dma_device_id": "system", 00:14:12.670 "dma_device_type": 1 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:12.670 "dma_device_type": 2 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "dma_device_id": "system", 00:14:12.670 "dma_device_type": 1 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.670 "dma_device_type": 2 00:14:12.670 } 00:14:12.670 ], 00:14:12.670 "driver_specific": { 00:14:12.670 "raid": { 00:14:12.670 "uuid": "6651d029-2c1a-43d0-91a8-4363606d5603", 00:14:12.670 "strip_size_kb": 64, 00:14:12.670 "state": "online", 00:14:12.670 "raid_level": "concat", 00:14:12.670 "superblock": false, 00:14:12.670 "num_base_bdevs": 4, 00:14:12.670 "num_base_bdevs_discovered": 4, 00:14:12.670 "num_base_bdevs_operational": 4, 00:14:12.670 "base_bdevs_list": [ 00:14:12.670 { 00:14:12.670 "name": "BaseBdev1", 00:14:12.670 "uuid": "a564edcf-82b1-45f4-be2c-8ebe0140f465", 00:14:12.670 "is_configured": true, 00:14:12.670 "data_offset": 0, 00:14:12.670 "data_size": 65536 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "name": "BaseBdev2", 00:14:12.670 "uuid": "4dde4f50-72e3-408b-bab8-163d85ad7d58", 00:14:12.670 "is_configured": true, 00:14:12.670 "data_offset": 0, 00:14:12.670 "data_size": 65536 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "name": "BaseBdev3", 00:14:12.670 "uuid": "0790779b-5591-4348-97dd-e048128ff12b", 00:14:12.670 "is_configured": true, 00:14:12.670 "data_offset": 0, 00:14:12.670 "data_size": 65536 00:14:12.670 }, 00:14:12.670 { 00:14:12.670 "name": "BaseBdev4", 00:14:12.670 "uuid": "d80334e5-7928-4f99-a21b-ffbbd3190cc7", 00:14:12.670 "is_configured": true, 00:14:12.670 "data_offset": 0, 00:14:12.670 "data_size": 65536 00:14:12.670 } 00:14:12.670 ] 00:14:12.670 } 00:14:12.670 } 00:14:12.670 }' 00:14:12.670 20:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:12.670 BaseBdev2 
00:14:12.670 BaseBdev3 00:14:12.670 BaseBdev4' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.670 20:27:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.670 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.930 20:27:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.930 [2024-11-26 20:27:06.248158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.930 [2024-11-26 20:27:06.248315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.930 [2024-11-26 20:27:06.248404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.930 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.931 "name": "Existed_Raid", 00:14:12.931 "uuid": "6651d029-2c1a-43d0-91a8-4363606d5603", 00:14:12.931 "strip_size_kb": 64, 00:14:12.931 "state": "offline", 00:14:12.931 "raid_level": "concat", 00:14:12.931 "superblock": false, 00:14:12.931 "num_base_bdevs": 4, 00:14:12.931 "num_base_bdevs_discovered": 3, 00:14:12.931 "num_base_bdevs_operational": 3, 00:14:12.931 "base_bdevs_list": [ 00:14:12.931 { 00:14:12.931 "name": null, 00:14:12.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.931 "is_configured": false, 00:14:12.931 "data_offset": 0, 00:14:12.931 "data_size": 65536 00:14:12.931 }, 00:14:12.931 { 00:14:12.931 "name": "BaseBdev2", 00:14:12.931 "uuid": "4dde4f50-72e3-408b-bab8-163d85ad7d58", 00:14:12.931 "is_configured": 
true, 00:14:12.931 "data_offset": 0, 00:14:12.931 "data_size": 65536 00:14:12.931 }, 00:14:12.931 { 00:14:12.931 "name": "BaseBdev3", 00:14:12.931 "uuid": "0790779b-5591-4348-97dd-e048128ff12b", 00:14:12.931 "is_configured": true, 00:14:12.931 "data_offset": 0, 00:14:12.931 "data_size": 65536 00:14:12.931 }, 00:14:12.931 { 00:14:12.931 "name": "BaseBdev4", 00:14:12.931 "uuid": "d80334e5-7928-4f99-a21b-ffbbd3190cc7", 00:14:12.931 "is_configured": true, 00:14:12.931 "data_offset": 0, 00:14:12.931 "data_size": 65536 00:14:12.931 } 00:14:12.931 ] 00:14:12.931 }' 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.931 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:13.500 20:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.500 [2024-11-26 20:27:06.898364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.500 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.500 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:13.500 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.500 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.501 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.501 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.501 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:13.501 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.760 [2024-11-26 20:27:07.080726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:13.760 20:27:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.760 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.760 [2024-11-26 20:27:07.257513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:13.760 [2024-11-26 20:27:07.257681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.019 BaseBdev2 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.019 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.019 [ 00:14:14.019 { 00:14:14.019 "name": "BaseBdev2", 00:14:14.019 "aliases": [ 00:14:14.019 "124211b9-c228-4c41-b4e9-13f3cf298a5d" 00:14:14.019 ], 00:14:14.019 "product_name": "Malloc disk", 00:14:14.020 "block_size": 512, 00:14:14.020 "num_blocks": 65536, 00:14:14.020 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:14.020 "assigned_rate_limits": { 00:14:14.020 "rw_ios_per_sec": 0, 00:14:14.020 "rw_mbytes_per_sec": 0, 00:14:14.020 "r_mbytes_per_sec": 0, 00:14:14.020 "w_mbytes_per_sec": 0 00:14:14.020 }, 00:14:14.020 "claimed": false, 00:14:14.020 "zoned": false, 00:14:14.020 "supported_io_types": { 00:14:14.020 "read": true, 00:14:14.020 "write": true, 00:14:14.020 "unmap": true, 00:14:14.020 "flush": true, 00:14:14.020 "reset": true, 00:14:14.020 "nvme_admin": false, 00:14:14.020 "nvme_io": false, 00:14:14.020 "nvme_io_md": false, 00:14:14.020 "write_zeroes": true, 00:14:14.020 "zcopy": true, 00:14:14.020 "get_zone_info": false, 00:14:14.020 "zone_management": false, 00:14:14.020 "zone_append": false, 00:14:14.020 "compare": false, 00:14:14.020 "compare_and_write": false, 00:14:14.020 "abort": true, 00:14:14.020 "seek_hole": false, 00:14:14.020 
"seek_data": false, 00:14:14.020 "copy": true, 00:14:14.020 "nvme_iov_md": false 00:14:14.020 }, 00:14:14.020 "memory_domains": [ 00:14:14.020 { 00:14:14.020 "dma_device_id": "system", 00:14:14.020 "dma_device_type": 1 00:14:14.020 }, 00:14:14.020 { 00:14:14.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.020 "dma_device_type": 2 00:14:14.020 } 00:14:14.020 ], 00:14:14.020 "driver_specific": {} 00:14:14.020 } 00:14:14.020 ] 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.020 BaseBdev3 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.020 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.280 [ 00:14:14.280 { 00:14:14.280 "name": "BaseBdev3", 00:14:14.280 "aliases": [ 00:14:14.280 "5320c003-00a3-461d-9859-63adac59a7d1" 00:14:14.280 ], 00:14:14.280 "product_name": "Malloc disk", 00:14:14.280 "block_size": 512, 00:14:14.280 "num_blocks": 65536, 00:14:14.280 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:14.280 "assigned_rate_limits": { 00:14:14.280 "rw_ios_per_sec": 0, 00:14:14.280 "rw_mbytes_per_sec": 0, 00:14:14.280 "r_mbytes_per_sec": 0, 00:14:14.280 "w_mbytes_per_sec": 0 00:14:14.280 }, 00:14:14.280 "claimed": false, 00:14:14.280 "zoned": false, 00:14:14.280 "supported_io_types": { 00:14:14.280 "read": true, 00:14:14.280 "write": true, 00:14:14.280 "unmap": true, 00:14:14.280 "flush": true, 00:14:14.280 "reset": true, 00:14:14.280 "nvme_admin": false, 00:14:14.280 "nvme_io": false, 00:14:14.280 "nvme_io_md": false, 00:14:14.280 "write_zeroes": true, 00:14:14.280 "zcopy": true, 00:14:14.280 "get_zone_info": false, 00:14:14.280 "zone_management": false, 00:14:14.280 "zone_append": false, 00:14:14.280 "compare": false, 00:14:14.280 "compare_and_write": false, 00:14:14.280 "abort": true, 00:14:14.281 "seek_hole": false, 00:14:14.281 "seek_data": false, 
00:14:14.281 "copy": true, 00:14:14.281 "nvme_iov_md": false 00:14:14.281 }, 00:14:14.281 "memory_domains": [ 00:14:14.281 { 00:14:14.281 "dma_device_id": "system", 00:14:14.281 "dma_device_type": 1 00:14:14.281 }, 00:14:14.281 { 00:14:14.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.281 "dma_device_type": 2 00:14:14.281 } 00:14:14.281 ], 00:14:14.281 "driver_specific": {} 00:14:14.281 } 00:14:14.281 ] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.281 BaseBdev4 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.281 
20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.281 [ 00:14:14.281 { 00:14:14.281 "name": "BaseBdev4", 00:14:14.281 "aliases": [ 00:14:14.281 "1113e136-883b-40d6-b2b2-8a44cfc5a8cb" 00:14:14.281 ], 00:14:14.281 "product_name": "Malloc disk", 00:14:14.281 "block_size": 512, 00:14:14.281 "num_blocks": 65536, 00:14:14.281 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:14.281 "assigned_rate_limits": { 00:14:14.281 "rw_ios_per_sec": 0, 00:14:14.281 "rw_mbytes_per_sec": 0, 00:14:14.281 "r_mbytes_per_sec": 0, 00:14:14.281 "w_mbytes_per_sec": 0 00:14:14.281 }, 00:14:14.281 "claimed": false, 00:14:14.281 "zoned": false, 00:14:14.281 "supported_io_types": { 00:14:14.281 "read": true, 00:14:14.281 "write": true, 00:14:14.281 "unmap": true, 00:14:14.281 "flush": true, 00:14:14.281 "reset": true, 00:14:14.281 "nvme_admin": false, 00:14:14.281 "nvme_io": false, 00:14:14.281 "nvme_io_md": false, 00:14:14.281 "write_zeroes": true, 00:14:14.281 "zcopy": true, 00:14:14.281 "get_zone_info": false, 00:14:14.281 "zone_management": false, 00:14:14.281 "zone_append": false, 00:14:14.281 "compare": false, 00:14:14.281 "compare_and_write": false, 00:14:14.281 "abort": true, 00:14:14.281 "seek_hole": false, 00:14:14.281 "seek_data": false, 00:14:14.281 
"copy": true, 00:14:14.281 "nvme_iov_md": false 00:14:14.281 }, 00:14:14.281 "memory_domains": [ 00:14:14.281 { 00:14:14.281 "dma_device_id": "system", 00:14:14.281 "dma_device_type": 1 00:14:14.281 }, 00:14:14.281 { 00:14:14.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.281 "dma_device_type": 2 00:14:14.281 } 00:14:14.281 ], 00:14:14.281 "driver_specific": {} 00:14:14.281 } 00:14:14.281 ] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.281 [2024-11-26 20:27:07.683322] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.281 [2024-11-26 20:27:07.683500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.281 [2024-11-26 20:27:07.683546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.281 [2024-11-26 20:27:07.685866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.281 [2024-11-26 20:27:07.685937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.281 20:27:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.281 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.281 "name": "Existed_Raid", 00:14:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.281 "strip_size_kb": 64, 00:14:14.281 "state": "configuring", 00:14:14.281 
"raid_level": "concat", 00:14:14.282 "superblock": false, 00:14:14.282 "num_base_bdevs": 4, 00:14:14.282 "num_base_bdevs_discovered": 3, 00:14:14.282 "num_base_bdevs_operational": 4, 00:14:14.282 "base_bdevs_list": [ 00:14:14.282 { 00:14:14.282 "name": "BaseBdev1", 00:14:14.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.282 "is_configured": false, 00:14:14.282 "data_offset": 0, 00:14:14.282 "data_size": 0 00:14:14.282 }, 00:14:14.282 { 00:14:14.282 "name": "BaseBdev2", 00:14:14.282 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:14.282 "is_configured": true, 00:14:14.282 "data_offset": 0, 00:14:14.282 "data_size": 65536 00:14:14.282 }, 00:14:14.282 { 00:14:14.282 "name": "BaseBdev3", 00:14:14.282 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:14.282 "is_configured": true, 00:14:14.282 "data_offset": 0, 00:14:14.282 "data_size": 65536 00:14:14.282 }, 00:14:14.282 { 00:14:14.282 "name": "BaseBdev4", 00:14:14.282 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:14.282 "is_configured": true, 00:14:14.282 "data_offset": 0, 00:14:14.282 "data_size": 65536 00:14:14.282 } 00:14:14.282 ] 00:14:14.282 }' 00:14:14.282 20:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.282 20:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.850 [2024-11-26 20:27:08.186450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.850 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.851 "name": "Existed_Raid", 00:14:14.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.851 "strip_size_kb": 64, 00:14:14.851 "state": "configuring", 00:14:14.851 "raid_level": "concat", 00:14:14.851 "superblock": false, 
00:14:14.851 "num_base_bdevs": 4, 00:14:14.851 "num_base_bdevs_discovered": 2, 00:14:14.851 "num_base_bdevs_operational": 4, 00:14:14.851 "base_bdevs_list": [ 00:14:14.851 { 00:14:14.851 "name": "BaseBdev1", 00:14:14.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.851 "is_configured": false, 00:14:14.851 "data_offset": 0, 00:14:14.851 "data_size": 0 00:14:14.851 }, 00:14:14.851 { 00:14:14.851 "name": null, 00:14:14.851 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:14.851 "is_configured": false, 00:14:14.851 "data_offset": 0, 00:14:14.851 "data_size": 65536 00:14:14.851 }, 00:14:14.851 { 00:14:14.851 "name": "BaseBdev3", 00:14:14.851 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:14.851 "is_configured": true, 00:14:14.851 "data_offset": 0, 00:14:14.851 "data_size": 65536 00:14:14.851 }, 00:14:14.851 { 00:14:14.851 "name": "BaseBdev4", 00:14:14.851 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:14.851 "is_configured": true, 00:14:14.851 "data_offset": 0, 00:14:14.851 "data_size": 65536 00:14:14.851 } 00:14:14.851 ] 00:14:14.851 }' 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.851 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:15.419 20:27:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 [2024-11-26 20:27:08.788570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.419 BaseBdev1 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.419 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.420 [ 00:14:15.420 { 00:14:15.420 "name": "BaseBdev1", 00:14:15.420 "aliases": [ 00:14:15.420 "4c04961a-0f3e-4d18-973a-8289d67190f4" 00:14:15.420 ], 00:14:15.420 "product_name": "Malloc disk", 00:14:15.420 "block_size": 512, 00:14:15.420 "num_blocks": 65536, 00:14:15.420 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:15.420 "assigned_rate_limits": { 00:14:15.420 "rw_ios_per_sec": 0, 00:14:15.420 "rw_mbytes_per_sec": 0, 00:14:15.420 "r_mbytes_per_sec": 0, 00:14:15.420 "w_mbytes_per_sec": 0 00:14:15.420 }, 00:14:15.420 "claimed": true, 00:14:15.420 "claim_type": "exclusive_write", 00:14:15.420 "zoned": false, 00:14:15.420 "supported_io_types": { 00:14:15.420 "read": true, 00:14:15.420 "write": true, 00:14:15.420 "unmap": true, 00:14:15.420 "flush": true, 00:14:15.420 "reset": true, 00:14:15.420 "nvme_admin": false, 00:14:15.420 "nvme_io": false, 00:14:15.420 "nvme_io_md": false, 00:14:15.420 "write_zeroes": true, 00:14:15.420 "zcopy": true, 00:14:15.420 "get_zone_info": false, 00:14:15.420 "zone_management": false, 00:14:15.420 "zone_append": false, 00:14:15.420 "compare": false, 00:14:15.420 "compare_and_write": false, 00:14:15.420 "abort": true, 00:14:15.420 "seek_hole": false, 00:14:15.420 "seek_data": false, 00:14:15.420 "copy": true, 00:14:15.420 "nvme_iov_md": false 00:14:15.420 }, 00:14:15.420 "memory_domains": [ 00:14:15.420 { 00:14:15.420 "dma_device_id": "system", 00:14:15.420 "dma_device_type": 1 00:14:15.420 }, 00:14:15.420 { 00:14:15.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.420 "dma_device_type": 2 00:14:15.420 } 00:14:15.420 ], 00:14:15.420 "driver_specific": {} 00:14:15.420 } 00:14:15.420 ] 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.420 "name": "Existed_Raid", 00:14:15.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.420 "strip_size_kb": 64, 00:14:15.420 "state": "configuring", 00:14:15.420 "raid_level": "concat", 00:14:15.420 "superblock": false, 
00:14:15.420 "num_base_bdevs": 4, 00:14:15.420 "num_base_bdevs_discovered": 3, 00:14:15.420 "num_base_bdevs_operational": 4, 00:14:15.420 "base_bdevs_list": [ 00:14:15.420 { 00:14:15.420 "name": "BaseBdev1", 00:14:15.420 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:15.420 "is_configured": true, 00:14:15.420 "data_offset": 0, 00:14:15.420 "data_size": 65536 00:14:15.420 }, 00:14:15.420 { 00:14:15.420 "name": null, 00:14:15.420 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:15.420 "is_configured": false, 00:14:15.420 "data_offset": 0, 00:14:15.420 "data_size": 65536 00:14:15.420 }, 00:14:15.420 { 00:14:15.420 "name": "BaseBdev3", 00:14:15.420 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:15.420 "is_configured": true, 00:14:15.420 "data_offset": 0, 00:14:15.420 "data_size": 65536 00:14:15.420 }, 00:14:15.420 { 00:14:15.420 "name": "BaseBdev4", 00:14:15.420 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:15.420 "is_configured": true, 00:14:15.420 "data_offset": 0, 00:14:15.420 "data_size": 65536 00:14:15.420 } 00:14:15.420 ] 00:14:15.420 }' 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.420 20:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:15.996 20:27:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.996 [2024-11-26 20:27:09.363753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.996 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.997 20:27:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.997 "name": "Existed_Raid", 00:14:15.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.997 "strip_size_kb": 64, 00:14:15.997 "state": "configuring", 00:14:15.997 "raid_level": "concat", 00:14:15.997 "superblock": false, 00:14:15.997 "num_base_bdevs": 4, 00:14:15.997 "num_base_bdevs_discovered": 2, 00:14:15.997 "num_base_bdevs_operational": 4, 00:14:15.997 "base_bdevs_list": [ 00:14:15.997 { 00:14:15.997 "name": "BaseBdev1", 00:14:15.997 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:15.997 "is_configured": true, 00:14:15.997 "data_offset": 0, 00:14:15.997 "data_size": 65536 00:14:15.997 }, 00:14:15.997 { 00:14:15.997 "name": null, 00:14:15.997 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:15.997 "is_configured": false, 00:14:15.997 "data_offset": 0, 00:14:15.997 "data_size": 65536 00:14:15.997 }, 00:14:15.997 { 00:14:15.997 "name": null, 00:14:15.997 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:15.997 "is_configured": false, 00:14:15.997 "data_offset": 0, 00:14:15.997 "data_size": 65536 00:14:15.997 }, 00:14:15.997 { 00:14:15.997 "name": "BaseBdev4", 00:14:15.997 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:15.997 "is_configured": true, 00:14:15.997 "data_offset": 0, 00:14:15.997 "data_size": 65536 00:14:15.997 } 00:14:15.997 ] 00:14:15.997 }' 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.997 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.566 [2024-11-26 20:27:09.870887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.566 "name": "Existed_Raid", 00:14:16.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.566 "strip_size_kb": 64, 00:14:16.566 "state": "configuring", 00:14:16.566 "raid_level": "concat", 00:14:16.566 "superblock": false, 00:14:16.566 "num_base_bdevs": 4, 00:14:16.566 "num_base_bdevs_discovered": 3, 00:14:16.566 "num_base_bdevs_operational": 4, 00:14:16.566 "base_bdevs_list": [ 00:14:16.566 { 00:14:16.566 "name": "BaseBdev1", 00:14:16.566 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:16.566 "is_configured": true, 00:14:16.566 "data_offset": 0, 00:14:16.566 "data_size": 65536 00:14:16.566 }, 00:14:16.566 { 00:14:16.566 "name": null, 00:14:16.566 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:16.566 "is_configured": false, 00:14:16.566 "data_offset": 0, 00:14:16.566 "data_size": 65536 00:14:16.566 }, 00:14:16.566 { 00:14:16.566 "name": "BaseBdev3", 00:14:16.566 "uuid": 
"5320c003-00a3-461d-9859-63adac59a7d1", 00:14:16.566 "is_configured": true, 00:14:16.566 "data_offset": 0, 00:14:16.566 "data_size": 65536 00:14:16.566 }, 00:14:16.566 { 00:14:16.566 "name": "BaseBdev4", 00:14:16.566 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:16.566 "is_configured": true, 00:14:16.566 "data_offset": 0, 00:14:16.566 "data_size": 65536 00:14:16.566 } 00:14:16.566 ] 00:14:16.566 }' 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.566 20:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.826 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.826 [2024-11-26 20:27:10.338142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.085 "name": "Existed_Raid", 00:14:17.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.085 "strip_size_kb": 64, 00:14:17.085 "state": "configuring", 00:14:17.085 "raid_level": "concat", 00:14:17.085 "superblock": false, 00:14:17.085 "num_base_bdevs": 4, 00:14:17.085 
"num_base_bdevs_discovered": 2, 00:14:17.085 "num_base_bdevs_operational": 4, 00:14:17.085 "base_bdevs_list": [ 00:14:17.085 { 00:14:17.085 "name": null, 00:14:17.085 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:17.085 "is_configured": false, 00:14:17.085 "data_offset": 0, 00:14:17.085 "data_size": 65536 00:14:17.085 }, 00:14:17.085 { 00:14:17.085 "name": null, 00:14:17.085 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:17.085 "is_configured": false, 00:14:17.085 "data_offset": 0, 00:14:17.085 "data_size": 65536 00:14:17.085 }, 00:14:17.085 { 00:14:17.085 "name": "BaseBdev3", 00:14:17.085 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:17.085 "is_configured": true, 00:14:17.085 "data_offset": 0, 00:14:17.085 "data_size": 65536 00:14:17.085 }, 00:14:17.085 { 00:14:17.085 "name": "BaseBdev4", 00:14:17.085 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:17.085 "is_configured": true, 00:14:17.085 "data_offset": 0, 00:14:17.085 "data_size": 65536 00:14:17.085 } 00:14:17.085 ] 00:14:17.085 }' 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.085 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.345 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.345 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.345 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.345 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:17.604 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.604 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:17.604 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:17.604 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.605 [2024-11-26 20:27:10.953006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.605 "name": "Existed_Raid", 00:14:17.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.605 "strip_size_kb": 64, 00:14:17.605 "state": "configuring", 00:14:17.605 "raid_level": "concat", 00:14:17.605 "superblock": false, 00:14:17.605 "num_base_bdevs": 4, 00:14:17.605 "num_base_bdevs_discovered": 3, 00:14:17.605 "num_base_bdevs_operational": 4, 00:14:17.605 "base_bdevs_list": [ 00:14:17.605 { 00:14:17.605 "name": null, 00:14:17.605 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:17.605 "is_configured": false, 00:14:17.605 "data_offset": 0, 00:14:17.605 "data_size": 65536 00:14:17.605 }, 00:14:17.605 { 00:14:17.605 "name": "BaseBdev2", 00:14:17.605 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:17.605 "is_configured": true, 00:14:17.605 "data_offset": 0, 00:14:17.605 "data_size": 65536 00:14:17.605 }, 00:14:17.605 { 00:14:17.605 "name": "BaseBdev3", 00:14:17.605 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:17.605 "is_configured": true, 00:14:17.605 "data_offset": 0, 00:14:17.605 "data_size": 65536 00:14:17.605 }, 00:14:17.605 { 00:14:17.605 "name": "BaseBdev4", 00:14:17.605 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:17.605 "is_configured": true, 00:14:17.605 "data_offset": 0, 00:14:17.605 "data_size": 65536 00:14:17.605 } 00:14:17.605 ] 00:14:17.605 }' 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.605 20:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.864 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:17.864 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:17.864 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.864 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.864 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.864 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:18.122 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:18.122 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.122 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.122 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.122 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.122 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c04961a-0f3e-4d18-973a-8289d67190f4 00:14:18.122 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.123 [2024-11-26 20:27:11.485234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:18.123 [2024-11-26 20:27:11.485377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:18.123 [2024-11-26 20:27:11.485396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:18.123 [2024-11-26 20:27:11.485684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:18.123 [2024-11-26 20:27:11.485835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:18.123 [2024-11-26 20:27:11.485847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:18.123 [2024-11-26 20:27:11.486081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.123 NewBaseBdev 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.123 [ 00:14:18.123 { 00:14:18.123 "name": "NewBaseBdev", 00:14:18.123 "aliases": [ 00:14:18.123 "4c04961a-0f3e-4d18-973a-8289d67190f4" 00:14:18.123 ], 00:14:18.123 "product_name": "Malloc disk", 00:14:18.123 "block_size": 512, 00:14:18.123 "num_blocks": 65536, 00:14:18.123 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:18.123 "assigned_rate_limits": { 00:14:18.123 "rw_ios_per_sec": 0, 00:14:18.123 "rw_mbytes_per_sec": 0, 00:14:18.123 "r_mbytes_per_sec": 0, 00:14:18.123 "w_mbytes_per_sec": 0 00:14:18.123 }, 00:14:18.123 "claimed": true, 00:14:18.123 "claim_type": "exclusive_write", 00:14:18.123 "zoned": false, 00:14:18.123 "supported_io_types": { 00:14:18.123 "read": true, 00:14:18.123 "write": true, 00:14:18.123 "unmap": true, 00:14:18.123 "flush": true, 00:14:18.123 "reset": true, 00:14:18.123 "nvme_admin": false, 00:14:18.123 "nvme_io": false, 00:14:18.123 "nvme_io_md": false, 00:14:18.123 "write_zeroes": true, 00:14:18.123 "zcopy": true, 00:14:18.123 "get_zone_info": false, 00:14:18.123 "zone_management": false, 00:14:18.123 "zone_append": false, 00:14:18.123 "compare": false, 00:14:18.123 "compare_and_write": false, 00:14:18.123 "abort": true, 00:14:18.123 "seek_hole": false, 00:14:18.123 "seek_data": false, 00:14:18.123 "copy": true, 00:14:18.123 "nvme_iov_md": false 00:14:18.123 }, 00:14:18.123 "memory_domains": [ 00:14:18.123 { 00:14:18.123 "dma_device_id": "system", 00:14:18.123 "dma_device_type": 1 00:14:18.123 }, 00:14:18.123 { 00:14:18.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.123 "dma_device_type": 2 00:14:18.123 } 00:14:18.123 ], 00:14:18.123 "driver_specific": {} 00:14:18.123 } 00:14:18.123 ] 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.123 "name": "Existed_Raid", 00:14:18.123 "uuid": "9084870c-b3cc-4be6-b451-1b98a4f1c76f", 00:14:18.123 "strip_size_kb": 64, 00:14:18.123 "state": "online", 00:14:18.123 "raid_level": "concat", 00:14:18.123 "superblock": false, 00:14:18.123 
"num_base_bdevs": 4, 00:14:18.123 "num_base_bdevs_discovered": 4, 00:14:18.123 "num_base_bdevs_operational": 4, 00:14:18.123 "base_bdevs_list": [ 00:14:18.123 { 00:14:18.123 "name": "NewBaseBdev", 00:14:18.123 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:18.123 "is_configured": true, 00:14:18.123 "data_offset": 0, 00:14:18.123 "data_size": 65536 00:14:18.123 }, 00:14:18.123 { 00:14:18.123 "name": "BaseBdev2", 00:14:18.123 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:18.123 "is_configured": true, 00:14:18.123 "data_offset": 0, 00:14:18.123 "data_size": 65536 00:14:18.123 }, 00:14:18.123 { 00:14:18.123 "name": "BaseBdev3", 00:14:18.123 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:18.123 "is_configured": true, 00:14:18.123 "data_offset": 0, 00:14:18.123 "data_size": 65536 00:14:18.123 }, 00:14:18.123 { 00:14:18.123 "name": "BaseBdev4", 00:14:18.123 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:18.123 "is_configured": true, 00:14:18.123 "data_offset": 0, 00:14:18.123 "data_size": 65536 00:14:18.123 } 00:14:18.123 ] 00:14:18.123 }' 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.123 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:18.690 20:27:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.690 [2024-11-26 20:27:11.944967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.690 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.690 "name": "Existed_Raid", 00:14:18.690 "aliases": [ 00:14:18.690 "9084870c-b3cc-4be6-b451-1b98a4f1c76f" 00:14:18.690 ], 00:14:18.690 "product_name": "Raid Volume", 00:14:18.690 "block_size": 512, 00:14:18.690 "num_blocks": 262144, 00:14:18.690 "uuid": "9084870c-b3cc-4be6-b451-1b98a4f1c76f", 00:14:18.690 "assigned_rate_limits": { 00:14:18.690 "rw_ios_per_sec": 0, 00:14:18.690 "rw_mbytes_per_sec": 0, 00:14:18.690 "r_mbytes_per_sec": 0, 00:14:18.690 "w_mbytes_per_sec": 0 00:14:18.690 }, 00:14:18.690 "claimed": false, 00:14:18.690 "zoned": false, 00:14:18.690 "supported_io_types": { 00:14:18.690 "read": true, 00:14:18.690 "write": true, 00:14:18.690 "unmap": true, 00:14:18.690 "flush": true, 00:14:18.690 "reset": true, 00:14:18.690 "nvme_admin": false, 00:14:18.690 "nvme_io": false, 00:14:18.690 "nvme_io_md": false, 00:14:18.690 "write_zeroes": true, 00:14:18.690 "zcopy": false, 00:14:18.690 "get_zone_info": false, 00:14:18.690 "zone_management": false, 00:14:18.690 "zone_append": false, 00:14:18.690 "compare": false, 00:14:18.690 "compare_and_write": false, 00:14:18.690 "abort": false, 00:14:18.690 "seek_hole": false, 00:14:18.690 "seek_data": false, 00:14:18.690 "copy": false, 00:14:18.690 "nvme_iov_md": false 00:14:18.690 }, 
00:14:18.690 "memory_domains": [ 00:14:18.690 { 00:14:18.690 "dma_device_id": "system", 00:14:18.690 "dma_device_type": 1 00:14:18.690 }, 00:14:18.690 { 00:14:18.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.690 "dma_device_type": 2 00:14:18.690 }, 00:14:18.690 { 00:14:18.690 "dma_device_id": "system", 00:14:18.690 "dma_device_type": 1 00:14:18.690 }, 00:14:18.690 { 00:14:18.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.690 "dma_device_type": 2 00:14:18.690 }, 00:14:18.690 { 00:14:18.690 "dma_device_id": "system", 00:14:18.690 "dma_device_type": 1 00:14:18.690 }, 00:14:18.690 { 00:14:18.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.690 "dma_device_type": 2 00:14:18.690 }, 00:14:18.690 { 00:14:18.690 "dma_device_id": "system", 00:14:18.690 "dma_device_type": 1 00:14:18.690 }, 00:14:18.690 { 00:14:18.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.690 "dma_device_type": 2 00:14:18.690 } 00:14:18.690 ], 00:14:18.690 "driver_specific": { 00:14:18.690 "raid": { 00:14:18.690 "uuid": "9084870c-b3cc-4be6-b451-1b98a4f1c76f", 00:14:18.690 "strip_size_kb": 64, 00:14:18.690 "state": "online", 00:14:18.690 "raid_level": "concat", 00:14:18.690 "superblock": false, 00:14:18.690 "num_base_bdevs": 4, 00:14:18.690 "num_base_bdevs_discovered": 4, 00:14:18.690 "num_base_bdevs_operational": 4, 00:14:18.690 "base_bdevs_list": [ 00:14:18.690 { 00:14:18.690 "name": "NewBaseBdev", 00:14:18.690 "uuid": "4c04961a-0f3e-4d18-973a-8289d67190f4", 00:14:18.691 "is_configured": true, 00:14:18.691 "data_offset": 0, 00:14:18.691 "data_size": 65536 00:14:18.691 }, 00:14:18.691 { 00:14:18.691 "name": "BaseBdev2", 00:14:18.691 "uuid": "124211b9-c228-4c41-b4e9-13f3cf298a5d", 00:14:18.691 "is_configured": true, 00:14:18.691 "data_offset": 0, 00:14:18.691 "data_size": 65536 00:14:18.691 }, 00:14:18.691 { 00:14:18.691 "name": "BaseBdev3", 00:14:18.691 "uuid": "5320c003-00a3-461d-9859-63adac59a7d1", 00:14:18.691 "is_configured": true, 00:14:18.691 "data_offset": 0, 
00:14:18.691 "data_size": 65536 00:14:18.691 }, 00:14:18.691 { 00:14:18.691 "name": "BaseBdev4", 00:14:18.691 "uuid": "1113e136-883b-40d6-b2b2-8a44cfc5a8cb", 00:14:18.691 "is_configured": true, 00:14:18.691 "data_offset": 0, 00:14:18.691 "data_size": 65536 00:14:18.691 } 00:14:18.691 ] 00:14:18.691 } 00:14:18.691 } 00:14:18.691 }' 00:14:18.691 20:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:18.691 BaseBdev2 00:14:18.691 BaseBdev3 00:14:18.691 BaseBdev4' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.691 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.970 [2024-11-26 20:27:12.295957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.970 [2024-11-26 20:27:12.295995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.970 [2024-11-26 20:27:12.296096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.970 [2024-11-26 20:27:12.296171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.970 [2024-11-26 20:27:12.296182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71610 00:14:18.970 20:27:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71610 ']' 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71610 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71610 00:14:18.970 killing process with pid 71610 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71610' 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71610 00:14:18.970 [2024-11-26 20:27:12.335456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.970 20:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71610 00:14:19.228 [2024-11-26 20:27:12.776356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:20.602 ************************************ 00:14:20.602 END TEST raid_state_function_test 00:14:20.602 ************************************ 00:14:20.602 00:14:20.602 real 0m12.217s 00:14:20.602 user 0m19.337s 00:14:20.602 sys 0m2.060s 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.602 20:27:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:20.602 20:27:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:20.602 20:27:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.602 20:27:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.602 ************************************ 00:14:20.602 START TEST raid_state_function_test_sb 00:14:20.602 ************************************ 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:20.602 Process raid pid: 72287 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72287 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72287' 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72287 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72287 ']' 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.602 20:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.860 [2024-11-26 20:27:14.173565] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:14:20.860 [2024-11-26 20:27:14.173788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.860 [2024-11-26 20:27:14.351672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.119 [2024-11-26 20:27:14.474432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.377 [2024-11-26 20:27:14.692303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.377 [2024-11-26 20:27:14.692340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.636 [2024-11-26 20:27:15.043499] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.636 [2024-11-26 20:27:15.043566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.636 [2024-11-26 20:27:15.043578] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.636 [2024-11-26 20:27:15.043590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.636 [2024-11-26 20:27:15.043603] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:21.636 [2024-11-26 20:27:15.043613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.636 [2024-11-26 20:27:15.043620] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:21.636 [2024-11-26 20:27:15.043629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.636 
20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.636 "name": "Existed_Raid", 00:14:21.636 "uuid": "da48931b-b28f-4d24-928d-7770191b1af2", 00:14:21.636 "strip_size_kb": 64, 00:14:21.636 "state": "configuring", 00:14:21.636 "raid_level": "concat", 00:14:21.636 "superblock": true, 00:14:21.636 "num_base_bdevs": 4, 00:14:21.636 "num_base_bdevs_discovered": 0, 00:14:21.636 "num_base_bdevs_operational": 4, 00:14:21.636 "base_bdevs_list": [ 00:14:21.636 { 00:14:21.636 "name": "BaseBdev1", 00:14:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.636 "is_configured": false, 00:14:21.636 "data_offset": 0, 00:14:21.636 "data_size": 0 00:14:21.636 }, 00:14:21.636 { 00:14:21.636 "name": "BaseBdev2", 00:14:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.636 "is_configured": false, 00:14:21.636 "data_offset": 0, 00:14:21.636 "data_size": 0 00:14:21.636 }, 00:14:21.636 { 00:14:21.636 "name": "BaseBdev3", 00:14:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.636 "is_configured": false, 00:14:21.636 "data_offset": 0, 00:14:21.636 "data_size": 0 00:14:21.636 }, 00:14:21.636 { 00:14:21.636 "name": "BaseBdev4", 00:14:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.636 "is_configured": false, 00:14:21.636 "data_offset": 0, 00:14:21.636 "data_size": 0 00:14:21.636 } 00:14:21.636 ] 00:14:21.636 }' 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.636 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 20:27:15 
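The `verify_raid_bdev_state` helper above fetches `bdev_raid_get_bdevs all` and narrows it with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields. As a minimal sketch (not SPDK code), the same selection and the checks the helper performs can be reproduced in Python against a hypothetical sample payload whose field names mirror the `raid_bdev_info` JSON captured in the log:

```python
import json

# Hypothetical sample of what `bdev_raid_get_bdevs all` might return;
# field names mirror the raid_bdev_info JSON shown in the log above.
sample = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in sample if b["name"] == "Existed_Raid")

# The comparisons verify_raid_bdev_state makes, expressed directly:
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
print(info["num_base_bdevs_discovered"])  # -> 0 (no base bdevs exist yet)
```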
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 [2024-11-26 20:27:15.534620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.204 [2024-11-26 20:27:15.534745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 [2024-11-26 20:27:15.542607] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.204 [2024-11-26 20:27:15.542657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.204 [2024-11-26 20:27:15.542668] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.204 [2024-11-26 20:27:15.542679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.204 [2024-11-26 20:27:15.542686] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:22.204 [2024-11-26 20:27:15.542696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.204 [2024-11-26 20:27:15.542704] bdev.c:8475:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:22.204 [2024-11-26 20:27:15.542713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 [2024-11-26 20:27:15.594361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.204 BaseBdev1 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 [ 00:14:22.204 { 00:14:22.204 "name": "BaseBdev1", 00:14:22.204 "aliases": [ 00:14:22.204 "054db7f7-7d85-4213-92c4-b11274ab2eb6" 00:14:22.204 ], 00:14:22.204 "product_name": "Malloc disk", 00:14:22.204 "block_size": 512, 00:14:22.204 "num_blocks": 65536, 00:14:22.204 "uuid": "054db7f7-7d85-4213-92c4-b11274ab2eb6", 00:14:22.204 "assigned_rate_limits": { 00:14:22.204 "rw_ios_per_sec": 0, 00:14:22.204 "rw_mbytes_per_sec": 0, 00:14:22.204 "r_mbytes_per_sec": 0, 00:14:22.204 "w_mbytes_per_sec": 0 00:14:22.204 }, 00:14:22.204 "claimed": true, 00:14:22.204 "claim_type": "exclusive_write", 00:14:22.204 "zoned": false, 00:14:22.204 "supported_io_types": { 00:14:22.204 "read": true, 00:14:22.204 "write": true, 00:14:22.204 "unmap": true, 00:14:22.204 "flush": true, 00:14:22.204 "reset": true, 00:14:22.204 "nvme_admin": false, 00:14:22.204 "nvme_io": false, 00:14:22.204 "nvme_io_md": false, 00:14:22.204 "write_zeroes": true, 00:14:22.204 "zcopy": true, 00:14:22.204 "get_zone_info": false, 00:14:22.204 "zone_management": false, 00:14:22.204 "zone_append": false, 00:14:22.204 "compare": false, 00:14:22.204 "compare_and_write": false, 00:14:22.204 "abort": true, 00:14:22.204 "seek_hole": false, 00:14:22.204 "seek_data": false, 00:14:22.204 "copy": true, 00:14:22.204 "nvme_iov_md": false 00:14:22.204 }, 00:14:22.204 "memory_domains": [ 00:14:22.204 { 00:14:22.204 "dma_device_id": "system", 00:14:22.204 "dma_device_type": 1 00:14:22.204 }, 00:14:22.204 { 00:14:22.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.204 "dma_device_type": 2 00:14:22.204 } 
00:14:22.204 ], 00:14:22.204 "driver_specific": {} 00:14:22.204 } 00:14:22.204 ] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 20:27:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.204 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.204 "name": "Existed_Raid", 00:14:22.204 "uuid": "adc15bc8-c9a0-4653-b729-1e1325c40de6", 00:14:22.204 "strip_size_kb": 64, 00:14:22.204 "state": "configuring", 00:14:22.204 "raid_level": "concat", 00:14:22.204 "superblock": true, 00:14:22.204 "num_base_bdevs": 4, 00:14:22.204 "num_base_bdevs_discovered": 1, 00:14:22.204 "num_base_bdevs_operational": 4, 00:14:22.204 "base_bdevs_list": [ 00:14:22.204 { 00:14:22.204 "name": "BaseBdev1", 00:14:22.204 "uuid": "054db7f7-7d85-4213-92c4-b11274ab2eb6", 00:14:22.204 "is_configured": true, 00:14:22.204 "data_offset": 2048, 00:14:22.204 "data_size": 63488 00:14:22.204 }, 00:14:22.204 { 00:14:22.204 "name": "BaseBdev2", 00:14:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.205 "is_configured": false, 00:14:22.205 "data_offset": 0, 00:14:22.205 "data_size": 0 00:14:22.205 }, 00:14:22.205 { 00:14:22.205 "name": "BaseBdev3", 00:14:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.205 "is_configured": false, 00:14:22.205 "data_offset": 0, 00:14:22.205 "data_size": 0 00:14:22.205 }, 00:14:22.205 { 00:14:22.205 "name": "BaseBdev4", 00:14:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.205 "is_configured": false, 00:14:22.205 "data_offset": 0, 00:14:22.205 "data_size": 0 00:14:22.205 } 00:14:22.205 ] 00:14:22.205 }' 00:14:22.205 20:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.205 20:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.773 20:27:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.773 [2024-11-26 20:27:16.105594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.773 [2024-11-26 20:27:16.105730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.773 [2024-11-26 20:27:16.117648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.773 [2024-11-26 20:27:16.119745] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.773 [2024-11-26 20:27:16.119862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.773 [2024-11-26 20:27:16.119880] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:22.773 [2024-11-26 20:27:16.119894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.773 [2024-11-26 20:27:16.119903] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:22.773 [2024-11-26 20:27:16.119914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:22.773 "name": "Existed_Raid", 00:14:22.773 "uuid": "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9", 00:14:22.773 "strip_size_kb": 64, 00:14:22.773 "state": "configuring", 00:14:22.773 "raid_level": "concat", 00:14:22.773 "superblock": true, 00:14:22.773 "num_base_bdevs": 4, 00:14:22.773 "num_base_bdevs_discovered": 1, 00:14:22.773 "num_base_bdevs_operational": 4, 00:14:22.773 "base_bdevs_list": [ 00:14:22.773 { 00:14:22.773 "name": "BaseBdev1", 00:14:22.773 "uuid": "054db7f7-7d85-4213-92c4-b11274ab2eb6", 00:14:22.773 "is_configured": true, 00:14:22.773 "data_offset": 2048, 00:14:22.773 "data_size": 63488 00:14:22.773 }, 00:14:22.773 { 00:14:22.773 "name": "BaseBdev2", 00:14:22.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.773 "is_configured": false, 00:14:22.773 "data_offset": 0, 00:14:22.773 "data_size": 0 00:14:22.773 }, 00:14:22.773 { 00:14:22.773 "name": "BaseBdev3", 00:14:22.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.773 "is_configured": false, 00:14:22.773 "data_offset": 0, 00:14:22.773 "data_size": 0 00:14:22.773 }, 00:14:22.773 { 00:14:22.773 "name": "BaseBdev4", 00:14:22.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.773 "is_configured": false, 00:14:22.773 "data_offset": 0, 00:14:22.773 "data_size": 0 00:14:22.773 } 00:14:22.773 ] 00:14:22.773 }' 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.773 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 [2024-11-26 20:27:16.659032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:23.341 BaseBdev2 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 [ 00:14:23.341 { 00:14:23.341 "name": "BaseBdev2", 00:14:23.341 "aliases": [ 00:14:23.341 "c777d2fa-9daf-4ea2-8d61-c026abf032f5" 00:14:23.341 ], 00:14:23.341 "product_name": "Malloc disk", 00:14:23.341 "block_size": 512, 00:14:23.341 "num_blocks": 65536, 00:14:23.341 "uuid": "c777d2fa-9daf-4ea2-8d61-c026abf032f5", 
00:14:23.341 "assigned_rate_limits": { 00:14:23.341 "rw_ios_per_sec": 0, 00:14:23.341 "rw_mbytes_per_sec": 0, 00:14:23.341 "r_mbytes_per_sec": 0, 00:14:23.341 "w_mbytes_per_sec": 0 00:14:23.341 }, 00:14:23.341 "claimed": true, 00:14:23.341 "claim_type": "exclusive_write", 00:14:23.341 "zoned": false, 00:14:23.341 "supported_io_types": { 00:14:23.341 "read": true, 00:14:23.341 "write": true, 00:14:23.341 "unmap": true, 00:14:23.341 "flush": true, 00:14:23.341 "reset": true, 00:14:23.341 "nvme_admin": false, 00:14:23.341 "nvme_io": false, 00:14:23.341 "nvme_io_md": false, 00:14:23.341 "write_zeroes": true, 00:14:23.341 "zcopy": true, 00:14:23.341 "get_zone_info": false, 00:14:23.341 "zone_management": false, 00:14:23.341 "zone_append": false, 00:14:23.341 "compare": false, 00:14:23.341 "compare_and_write": false, 00:14:23.341 "abort": true, 00:14:23.341 "seek_hole": false, 00:14:23.341 "seek_data": false, 00:14:23.341 "copy": true, 00:14:23.341 "nvme_iov_md": false 00:14:23.341 }, 00:14:23.341 "memory_domains": [ 00:14:23.341 { 00:14:23.341 "dma_device_id": "system", 00:14:23.341 "dma_device_type": 1 00:14:23.341 }, 00:14:23.341 { 00:14:23.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.341 "dma_device_type": 2 00:14:23.341 } 00:14:23.341 ], 00:14:23.341 "driver_specific": {} 00:14:23.341 } 00:14:23.341 ] 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.341 "name": "Existed_Raid", 00:14:23.341 "uuid": "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9", 00:14:23.341 "strip_size_kb": 64, 00:14:23.341 "state": "configuring", 00:14:23.341 "raid_level": "concat", 00:14:23.341 "superblock": true, 00:14:23.341 "num_base_bdevs": 4, 00:14:23.341 "num_base_bdevs_discovered": 2, 00:14:23.341 
"num_base_bdevs_operational": 4, 00:14:23.341 "base_bdevs_list": [ 00:14:23.341 { 00:14:23.341 "name": "BaseBdev1", 00:14:23.341 "uuid": "054db7f7-7d85-4213-92c4-b11274ab2eb6", 00:14:23.341 "is_configured": true, 00:14:23.341 "data_offset": 2048, 00:14:23.341 "data_size": 63488 00:14:23.341 }, 00:14:23.341 { 00:14:23.341 "name": "BaseBdev2", 00:14:23.341 "uuid": "c777d2fa-9daf-4ea2-8d61-c026abf032f5", 00:14:23.341 "is_configured": true, 00:14:23.341 "data_offset": 2048, 00:14:23.341 "data_size": 63488 00:14:23.341 }, 00:14:23.341 { 00:14:23.341 "name": "BaseBdev3", 00:14:23.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.341 "is_configured": false, 00:14:23.341 "data_offset": 0, 00:14:23.341 "data_size": 0 00:14:23.341 }, 00:14:23.341 { 00:14:23.341 "name": "BaseBdev4", 00:14:23.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.341 "is_configured": false, 00:14:23.341 "data_offset": 0, 00:14:23.341 "data_size": 0 00:14:23.341 } 00:14:23.341 ] 00:14:23.341 }' 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.341 20:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.910 [2024-11-26 20:27:17.229715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.910 BaseBdev3 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.910 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.910 [ 00:14:23.910 { 00:14:23.910 "name": "BaseBdev3", 00:14:23.910 "aliases": [ 00:14:23.910 "aea5cb11-d5b4-4071-9783-59ca0cb4dddc" 00:14:23.910 ], 00:14:23.910 "product_name": "Malloc disk", 00:14:23.910 "block_size": 512, 00:14:23.910 "num_blocks": 65536, 00:14:23.910 "uuid": "aea5cb11-d5b4-4071-9783-59ca0cb4dddc", 00:14:23.910 "assigned_rate_limits": { 00:14:23.910 "rw_ios_per_sec": 0, 00:14:23.910 "rw_mbytes_per_sec": 0, 00:14:23.910 "r_mbytes_per_sec": 0, 00:14:23.910 "w_mbytes_per_sec": 0 00:14:23.910 }, 00:14:23.910 "claimed": true, 00:14:23.910 "claim_type": "exclusive_write", 00:14:23.910 "zoned": false, 00:14:23.910 "supported_io_types": { 
00:14:23.910 "read": true, 00:14:23.910 "write": true, 00:14:23.910 "unmap": true, 00:14:23.910 "flush": true, 00:14:23.910 "reset": true, 00:14:23.911 "nvme_admin": false, 00:14:23.911 "nvme_io": false, 00:14:23.911 "nvme_io_md": false, 00:14:23.911 "write_zeroes": true, 00:14:23.911 "zcopy": true, 00:14:23.911 "get_zone_info": false, 00:14:23.911 "zone_management": false, 00:14:23.911 "zone_append": false, 00:14:23.911 "compare": false, 00:14:23.911 "compare_and_write": false, 00:14:23.911 "abort": true, 00:14:23.911 "seek_hole": false, 00:14:23.911 "seek_data": false, 00:14:23.911 "copy": true, 00:14:23.911 "nvme_iov_md": false 00:14:23.911 }, 00:14:23.911 "memory_domains": [ 00:14:23.911 { 00:14:23.911 "dma_device_id": "system", 00:14:23.911 "dma_device_type": 1 00:14:23.911 }, 00:14:23.911 { 00:14:23.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.911 "dma_device_type": 2 00:14:23.911 } 00:14:23.911 ], 00:14:23.911 "driver_specific": {} 00:14:23.911 } 00:14:23.911 ] 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.911 "name": "Existed_Raid", 00:14:23.911 "uuid": "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9", 00:14:23.911 "strip_size_kb": 64, 00:14:23.911 "state": "configuring", 00:14:23.911 "raid_level": "concat", 00:14:23.911 "superblock": true, 00:14:23.911 "num_base_bdevs": 4, 00:14:23.911 "num_base_bdevs_discovered": 3, 00:14:23.911 "num_base_bdevs_operational": 4, 00:14:23.911 "base_bdevs_list": [ 00:14:23.911 { 00:14:23.911 "name": "BaseBdev1", 00:14:23.911 "uuid": "054db7f7-7d85-4213-92c4-b11274ab2eb6", 00:14:23.911 "is_configured": true, 00:14:23.911 "data_offset": 2048, 00:14:23.911 "data_size": 63488 00:14:23.911 }, 00:14:23.911 { 00:14:23.911 "name": "BaseBdev2", 00:14:23.911 
"uuid": "c777d2fa-9daf-4ea2-8d61-c026abf032f5", 00:14:23.911 "is_configured": true, 00:14:23.911 "data_offset": 2048, 00:14:23.911 "data_size": 63488 00:14:23.911 }, 00:14:23.911 { 00:14:23.911 "name": "BaseBdev3", 00:14:23.911 "uuid": "aea5cb11-d5b4-4071-9783-59ca0cb4dddc", 00:14:23.911 "is_configured": true, 00:14:23.911 "data_offset": 2048, 00:14:23.911 "data_size": 63488 00:14:23.911 }, 00:14:23.911 { 00:14:23.911 "name": "BaseBdev4", 00:14:23.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.911 "is_configured": false, 00:14:23.911 "data_offset": 0, 00:14:23.911 "data_size": 0 00:14:23.911 } 00:14:23.911 ] 00:14:23.911 }' 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.911 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.481 [2024-11-26 20:27:17.784903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:24.481 [2024-11-26 20:27:17.785350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:24.481 [2024-11-26 20:27:17.785415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:24.481 BaseBdev4 00:14:24.481 [2024-11-26 20:27:17.785785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:24.481 [2024-11-26 20:27:17.785974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:24.481 [2024-11-26 20:27:17.786037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:24.481 [2024-11-26 20:27:17.786261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.481 [ 00:14:24.481 { 00:14:24.481 "name": "BaseBdev4", 00:14:24.481 "aliases": [ 00:14:24.481 "157f168e-c496-4782-b29f-fba5f70c0341" 00:14:24.481 ], 00:14:24.481 "product_name": "Malloc disk", 00:14:24.481 "block_size": 512, 00:14:24.481 
"num_blocks": 65536, 00:14:24.481 "uuid": "157f168e-c496-4782-b29f-fba5f70c0341", 00:14:24.481 "assigned_rate_limits": { 00:14:24.481 "rw_ios_per_sec": 0, 00:14:24.481 "rw_mbytes_per_sec": 0, 00:14:24.481 "r_mbytes_per_sec": 0, 00:14:24.481 "w_mbytes_per_sec": 0 00:14:24.481 }, 00:14:24.481 "claimed": true, 00:14:24.481 "claim_type": "exclusive_write", 00:14:24.481 "zoned": false, 00:14:24.481 "supported_io_types": { 00:14:24.481 "read": true, 00:14:24.481 "write": true, 00:14:24.481 "unmap": true, 00:14:24.481 "flush": true, 00:14:24.481 "reset": true, 00:14:24.481 "nvme_admin": false, 00:14:24.481 "nvme_io": false, 00:14:24.481 "nvme_io_md": false, 00:14:24.481 "write_zeroes": true, 00:14:24.481 "zcopy": true, 00:14:24.481 "get_zone_info": false, 00:14:24.481 "zone_management": false, 00:14:24.481 "zone_append": false, 00:14:24.481 "compare": false, 00:14:24.481 "compare_and_write": false, 00:14:24.481 "abort": true, 00:14:24.481 "seek_hole": false, 00:14:24.481 "seek_data": false, 00:14:24.481 "copy": true, 00:14:24.481 "nvme_iov_md": false 00:14:24.481 }, 00:14:24.481 "memory_domains": [ 00:14:24.481 { 00:14:24.481 "dma_device_id": "system", 00:14:24.481 "dma_device_type": 1 00:14:24.481 }, 00:14:24.481 { 00:14:24.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.481 "dma_device_type": 2 00:14:24.481 } 00:14:24.481 ], 00:14:24.481 "driver_specific": {} 00:14:24.481 } 00:14:24.481 ] 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.481 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.482 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.482 "name": "Existed_Raid", 00:14:24.482 "uuid": "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9", 00:14:24.482 "strip_size_kb": 64, 00:14:24.482 "state": "online", 00:14:24.482 "raid_level": "concat", 00:14:24.482 "superblock": true, 00:14:24.482 "num_base_bdevs": 4, 
00:14:24.482 "num_base_bdevs_discovered": 4, 00:14:24.482 "num_base_bdevs_operational": 4, 00:14:24.482 "base_bdevs_list": [ 00:14:24.482 { 00:14:24.482 "name": "BaseBdev1", 00:14:24.482 "uuid": "054db7f7-7d85-4213-92c4-b11274ab2eb6", 00:14:24.482 "is_configured": true, 00:14:24.482 "data_offset": 2048, 00:14:24.482 "data_size": 63488 00:14:24.482 }, 00:14:24.482 { 00:14:24.482 "name": "BaseBdev2", 00:14:24.482 "uuid": "c777d2fa-9daf-4ea2-8d61-c026abf032f5", 00:14:24.482 "is_configured": true, 00:14:24.482 "data_offset": 2048, 00:14:24.482 "data_size": 63488 00:14:24.482 }, 00:14:24.482 { 00:14:24.482 "name": "BaseBdev3", 00:14:24.482 "uuid": "aea5cb11-d5b4-4071-9783-59ca0cb4dddc", 00:14:24.482 "is_configured": true, 00:14:24.482 "data_offset": 2048, 00:14:24.482 "data_size": 63488 00:14:24.482 }, 00:14:24.482 { 00:14:24.482 "name": "BaseBdev4", 00:14:24.482 "uuid": "157f168e-c496-4782-b29f-fba5f70c0341", 00:14:24.482 "is_configured": true, 00:14:24.482 "data_offset": 2048, 00:14:24.482 "data_size": 63488 00:14:24.482 } 00:14:24.482 ] 00:14:24.482 }' 00:14:24.482 20:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.482 20:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.741 
20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.741 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.741 [2024-11-26 20:27:18.292665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.002 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.002 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:25.002 "name": "Existed_Raid", 00:14:25.002 "aliases": [ 00:14:25.002 "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9" 00:14:25.002 ], 00:14:25.002 "product_name": "Raid Volume", 00:14:25.002 "block_size": 512, 00:14:25.002 "num_blocks": 253952, 00:14:25.002 "uuid": "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9", 00:14:25.002 "assigned_rate_limits": { 00:14:25.002 "rw_ios_per_sec": 0, 00:14:25.002 "rw_mbytes_per_sec": 0, 00:14:25.002 "r_mbytes_per_sec": 0, 00:14:25.002 "w_mbytes_per_sec": 0 00:14:25.002 }, 00:14:25.002 "claimed": false, 00:14:25.002 "zoned": false, 00:14:25.002 "supported_io_types": { 00:14:25.002 "read": true, 00:14:25.002 "write": true, 00:14:25.002 "unmap": true, 00:14:25.002 "flush": true, 00:14:25.002 "reset": true, 00:14:25.002 "nvme_admin": false, 00:14:25.002 "nvme_io": false, 00:14:25.002 "nvme_io_md": false, 00:14:25.002 "write_zeroes": true, 00:14:25.002 "zcopy": false, 00:14:25.002 "get_zone_info": false, 00:14:25.002 "zone_management": false, 00:14:25.002 "zone_append": false, 00:14:25.002 "compare": false, 00:14:25.002 "compare_and_write": false, 00:14:25.002 "abort": false, 00:14:25.002 "seek_hole": false, 00:14:25.002 "seek_data": false, 00:14:25.002 "copy": false, 00:14:25.002 
"nvme_iov_md": false 00:14:25.002 }, 00:14:25.002 "memory_domains": [ 00:14:25.002 { 00:14:25.002 "dma_device_id": "system", 00:14:25.002 "dma_device_type": 1 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.002 "dma_device_type": 2 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "dma_device_id": "system", 00:14:25.002 "dma_device_type": 1 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.002 "dma_device_type": 2 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "dma_device_id": "system", 00:14:25.002 "dma_device_type": 1 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.002 "dma_device_type": 2 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "dma_device_id": "system", 00:14:25.002 "dma_device_type": 1 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.002 "dma_device_type": 2 00:14:25.002 } 00:14:25.002 ], 00:14:25.002 "driver_specific": { 00:14:25.002 "raid": { 00:14:25.002 "uuid": "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9", 00:14:25.002 "strip_size_kb": 64, 00:14:25.002 "state": "online", 00:14:25.002 "raid_level": "concat", 00:14:25.002 "superblock": true, 00:14:25.002 "num_base_bdevs": 4, 00:14:25.002 "num_base_bdevs_discovered": 4, 00:14:25.002 "num_base_bdevs_operational": 4, 00:14:25.002 "base_bdevs_list": [ 00:14:25.002 { 00:14:25.002 "name": "BaseBdev1", 00:14:25.002 "uuid": "054db7f7-7d85-4213-92c4-b11274ab2eb6", 00:14:25.002 "is_configured": true, 00:14:25.002 "data_offset": 2048, 00:14:25.002 "data_size": 63488 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "name": "BaseBdev2", 00:14:25.002 "uuid": "c777d2fa-9daf-4ea2-8d61-c026abf032f5", 00:14:25.002 "is_configured": true, 00:14:25.002 "data_offset": 2048, 00:14:25.002 "data_size": 63488 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "name": "BaseBdev3", 00:14:25.002 "uuid": "aea5cb11-d5b4-4071-9783-59ca0cb4dddc", 00:14:25.002 "is_configured": true, 
00:14:25.002 "data_offset": 2048, 00:14:25.002 "data_size": 63488 00:14:25.002 }, 00:14:25.002 { 00:14:25.002 "name": "BaseBdev4", 00:14:25.002 "uuid": "157f168e-c496-4782-b29f-fba5f70c0341", 00:14:25.002 "is_configured": true, 00:14:25.002 "data_offset": 2048, 00:14:25.002 "data_size": 63488 00:14:25.002 } 00:14:25.002 ] 00:14:25.002 } 00:14:25.002 } 00:14:25.003 }' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:25.003 BaseBdev2 00:14:25.003 BaseBdev3 00:14:25.003 BaseBdev4' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.003 20:27:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.003 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.263 [2024-11-26 20:27:18.611748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.263 [2024-11-26 20:27:18.611786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.263 [2024-11-26 20:27:18.611843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.263 "name": "Existed_Raid", 00:14:25.263 "uuid": "b57a4a8d-873f-43ee-b4a5-f0d7c223d6e9", 00:14:25.263 "strip_size_kb": 64, 00:14:25.263 "state": "offline", 00:14:25.263 "raid_level": "concat", 00:14:25.263 "superblock": true, 00:14:25.263 "num_base_bdevs": 4, 00:14:25.263 "num_base_bdevs_discovered": 3, 00:14:25.263 "num_base_bdevs_operational": 3, 00:14:25.263 "base_bdevs_list": [ 00:14:25.263 { 00:14:25.263 "name": null, 00:14:25.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.263 "is_configured": false, 00:14:25.263 "data_offset": 0, 00:14:25.263 "data_size": 63488 00:14:25.263 }, 00:14:25.263 { 00:14:25.263 "name": "BaseBdev2", 00:14:25.263 "uuid": "c777d2fa-9daf-4ea2-8d61-c026abf032f5", 00:14:25.263 "is_configured": true, 00:14:25.263 "data_offset": 2048, 00:14:25.263 "data_size": 63488 00:14:25.263 }, 00:14:25.263 { 00:14:25.263 "name": "BaseBdev3", 00:14:25.263 "uuid": "aea5cb11-d5b4-4071-9783-59ca0cb4dddc", 00:14:25.263 "is_configured": true, 00:14:25.263 "data_offset": 2048, 00:14:25.263 "data_size": 63488 00:14:25.263 }, 00:14:25.263 { 00:14:25.263 "name": "BaseBdev4", 00:14:25.263 "uuid": "157f168e-c496-4782-b29f-fba5f70c0341", 00:14:25.263 "is_configured": true, 00:14:25.263 "data_offset": 2048, 00:14:25.263 "data_size": 63488 00:14:25.263 } 00:14:25.263 ] 00:14:25.263 }' 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.263 20:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.830 
20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.830 [2024-11-26 20:27:19.227975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.830 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.089 [2024-11-26 20:27:19.402289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:26.089 20:27:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.089 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.089 [2024-11-26 20:27:19.566961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:26.089 [2024-11-26 20:27:19.567103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.348 BaseBdev2 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.348 [ 00:14:26.348 { 00:14:26.348 "name": "BaseBdev2", 00:14:26.348 "aliases": [ 00:14:26.348 
"cd7a2770-63a3-4538-a9fe-71e0ba88ceb4" 00:14:26.348 ], 00:14:26.348 "product_name": "Malloc disk", 00:14:26.348 "block_size": 512, 00:14:26.348 "num_blocks": 65536, 00:14:26.348 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:26.348 "assigned_rate_limits": { 00:14:26.348 "rw_ios_per_sec": 0, 00:14:26.348 "rw_mbytes_per_sec": 0, 00:14:26.348 "r_mbytes_per_sec": 0, 00:14:26.348 "w_mbytes_per_sec": 0 00:14:26.348 }, 00:14:26.348 "claimed": false, 00:14:26.348 "zoned": false, 00:14:26.348 "supported_io_types": { 00:14:26.348 "read": true, 00:14:26.348 "write": true, 00:14:26.348 "unmap": true, 00:14:26.348 "flush": true, 00:14:26.348 "reset": true, 00:14:26.348 "nvme_admin": false, 00:14:26.348 "nvme_io": false, 00:14:26.348 "nvme_io_md": false, 00:14:26.348 "write_zeroes": true, 00:14:26.348 "zcopy": true, 00:14:26.348 "get_zone_info": false, 00:14:26.348 "zone_management": false, 00:14:26.348 "zone_append": false, 00:14:26.348 "compare": false, 00:14:26.348 "compare_and_write": false, 00:14:26.348 "abort": true, 00:14:26.348 "seek_hole": false, 00:14:26.348 "seek_data": false, 00:14:26.348 "copy": true, 00:14:26.348 "nvme_iov_md": false 00:14:26.348 }, 00:14:26.348 "memory_domains": [ 00:14:26.348 { 00:14:26.348 "dma_device_id": "system", 00:14:26.348 "dma_device_type": 1 00:14:26.348 }, 00:14:26.348 { 00:14:26.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.348 "dma_device_type": 2 00:14:26.348 } 00:14:26.348 ], 00:14:26.348 "driver_specific": {} 00:14:26.348 } 00:14:26.348 ] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.348 20:27:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.348 BaseBdev3 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.348 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.349 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.349 [ 00:14:26.349 { 
00:14:26.349 "name": "BaseBdev3", 00:14:26.349 "aliases": [ 00:14:26.349 "dad378e0-a1ff-4b13-a32d-eb8499af0e97" 00:14:26.349 ], 00:14:26.349 "product_name": "Malloc disk", 00:14:26.349 "block_size": 512, 00:14:26.349 "num_blocks": 65536, 00:14:26.349 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:26.349 "assigned_rate_limits": { 00:14:26.349 "rw_ios_per_sec": 0, 00:14:26.349 "rw_mbytes_per_sec": 0, 00:14:26.349 "r_mbytes_per_sec": 0, 00:14:26.349 "w_mbytes_per_sec": 0 00:14:26.349 }, 00:14:26.349 "claimed": false, 00:14:26.349 "zoned": false, 00:14:26.349 "supported_io_types": { 00:14:26.349 "read": true, 00:14:26.349 "write": true, 00:14:26.349 "unmap": true, 00:14:26.349 "flush": true, 00:14:26.349 "reset": true, 00:14:26.349 "nvme_admin": false, 00:14:26.349 "nvme_io": false, 00:14:26.349 "nvme_io_md": false, 00:14:26.349 "write_zeroes": true, 00:14:26.349 "zcopy": true, 00:14:26.349 "get_zone_info": false, 00:14:26.349 "zone_management": false, 00:14:26.349 "zone_append": false, 00:14:26.349 "compare": false, 00:14:26.349 "compare_and_write": false, 00:14:26.349 "abort": true, 00:14:26.349 "seek_hole": false, 00:14:26.349 "seek_data": false, 00:14:26.349 "copy": true, 00:14:26.349 "nvme_iov_md": false 00:14:26.349 }, 00:14:26.349 "memory_domains": [ 00:14:26.349 { 00:14:26.349 "dma_device_id": "system", 00:14:26.349 "dma_device_type": 1 00:14:26.349 }, 00:14:26.349 { 00:14:26.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.349 "dma_device_type": 2 00:14:26.349 } 00:14:26.607 ], 00:14:26.607 "driver_specific": {} 00:14:26.607 } 00:14:26.607 ] 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.607 BaseBdev4 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.607 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:26.608 [ 00:14:26.608 { 00:14:26.608 "name": "BaseBdev4", 00:14:26.608 "aliases": [ 00:14:26.608 "8d470f98-dd3d-4fc9-9e25-18a96ac3c261" 00:14:26.608 ], 00:14:26.608 "product_name": "Malloc disk", 00:14:26.608 "block_size": 512, 00:14:26.608 "num_blocks": 65536, 00:14:26.608 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:26.608 "assigned_rate_limits": { 00:14:26.608 "rw_ios_per_sec": 0, 00:14:26.608 "rw_mbytes_per_sec": 0, 00:14:26.608 "r_mbytes_per_sec": 0, 00:14:26.608 "w_mbytes_per_sec": 0 00:14:26.608 }, 00:14:26.608 "claimed": false, 00:14:26.608 "zoned": false, 00:14:26.608 "supported_io_types": { 00:14:26.608 "read": true, 00:14:26.608 "write": true, 00:14:26.608 "unmap": true, 00:14:26.608 "flush": true, 00:14:26.608 "reset": true, 00:14:26.608 "nvme_admin": false, 00:14:26.608 "nvme_io": false, 00:14:26.608 "nvme_io_md": false, 00:14:26.608 "write_zeroes": true, 00:14:26.608 "zcopy": true, 00:14:26.608 "get_zone_info": false, 00:14:26.608 "zone_management": false, 00:14:26.608 "zone_append": false, 00:14:26.608 "compare": false, 00:14:26.608 "compare_and_write": false, 00:14:26.608 "abort": true, 00:14:26.608 "seek_hole": false, 00:14:26.608 "seek_data": false, 00:14:26.608 "copy": true, 00:14:26.608 "nvme_iov_md": false 00:14:26.608 }, 00:14:26.608 "memory_domains": [ 00:14:26.608 { 00:14:26.608 "dma_device_id": "system", 00:14:26.608 "dma_device_type": 1 00:14:26.608 }, 00:14:26.608 { 00:14:26.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.608 "dma_device_type": 2 00:14:26.608 } 00:14:26.608 ], 00:14:26.608 "driver_specific": {} 00:14:26.608 } 00:14:26.608 ] 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.608 20:27:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.608 [2024-11-26 20:27:19.989550] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.608 [2024-11-26 20:27:19.989664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.608 [2024-11-26 20:27:19.989725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.608 [2024-11-26 20:27:19.992036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.608 [2024-11-26 20:27:19.992141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.608 20:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.608 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.608 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.608 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.608 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.608 "name": "Existed_Raid", 00:14:26.608 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:26.608 "strip_size_kb": 64, 00:14:26.608 "state": "configuring", 00:14:26.608 "raid_level": "concat", 00:14:26.608 "superblock": true, 00:14:26.608 "num_base_bdevs": 4, 00:14:26.608 "num_base_bdevs_discovered": 3, 00:14:26.608 "num_base_bdevs_operational": 4, 00:14:26.608 "base_bdevs_list": [ 00:14:26.608 { 00:14:26.608 "name": "BaseBdev1", 00:14:26.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.608 "is_configured": false, 00:14:26.608 "data_offset": 0, 00:14:26.608 "data_size": 0 00:14:26.608 }, 00:14:26.608 { 00:14:26.608 "name": "BaseBdev2", 00:14:26.608 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:26.608 "is_configured": true, 00:14:26.608 "data_offset": 2048, 00:14:26.608 "data_size": 63488 
00:14:26.608 }, 00:14:26.608 { 00:14:26.608 "name": "BaseBdev3", 00:14:26.608 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:26.608 "is_configured": true, 00:14:26.608 "data_offset": 2048, 00:14:26.608 "data_size": 63488 00:14:26.608 }, 00:14:26.608 { 00:14:26.608 "name": "BaseBdev4", 00:14:26.608 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:26.608 "is_configured": true, 00:14:26.608 "data_offset": 2048, 00:14:26.608 "data_size": 63488 00:14:26.608 } 00:14:26.608 ] 00:14:26.608 }' 00:14:26.608 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.608 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.175 [2024-11-26 20:27:20.480807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.175 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.175 "name": "Existed_Raid", 00:14:27.175 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:27.175 "strip_size_kb": 64, 00:14:27.175 "state": "configuring", 00:14:27.175 "raid_level": "concat", 00:14:27.175 "superblock": true, 00:14:27.175 "num_base_bdevs": 4, 00:14:27.175 "num_base_bdevs_discovered": 2, 00:14:27.175 "num_base_bdevs_operational": 4, 00:14:27.175 "base_bdevs_list": [ 00:14:27.175 { 00:14:27.175 "name": "BaseBdev1", 00:14:27.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.175 "is_configured": false, 00:14:27.175 "data_offset": 0, 00:14:27.175 "data_size": 0 00:14:27.175 }, 00:14:27.175 { 00:14:27.175 "name": null, 00:14:27.175 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:27.175 "is_configured": false, 00:14:27.175 "data_offset": 0, 00:14:27.175 "data_size": 63488 
00:14:27.175 }, 00:14:27.175 { 00:14:27.175 "name": "BaseBdev3", 00:14:27.176 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:27.176 "is_configured": true, 00:14:27.176 "data_offset": 2048, 00:14:27.176 "data_size": 63488 00:14:27.176 }, 00:14:27.176 { 00:14:27.176 "name": "BaseBdev4", 00:14:27.176 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:27.176 "is_configured": true, 00:14:27.176 "data_offset": 2048, 00:14:27.176 "data_size": 63488 00:14:27.176 } 00:14:27.176 ] 00:14:27.176 }' 00:14:27.176 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.176 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.434 20:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 [2024-11-26 20:27:21.010841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.693 BaseBdev1 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 [ 00:14:27.693 { 00:14:27.693 "name": "BaseBdev1", 00:14:27.693 "aliases": [ 00:14:27.693 "b4f49b40-0064-43ed-97d9-7436e863f404" 00:14:27.693 ], 00:14:27.693 "product_name": "Malloc disk", 00:14:27.693 "block_size": 512, 00:14:27.693 "num_blocks": 65536, 00:14:27.693 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:27.693 "assigned_rate_limits": { 00:14:27.693 "rw_ios_per_sec": 0, 00:14:27.693 "rw_mbytes_per_sec": 0, 
00:14:27.693 "r_mbytes_per_sec": 0, 00:14:27.693 "w_mbytes_per_sec": 0 00:14:27.693 }, 00:14:27.693 "claimed": true, 00:14:27.693 "claim_type": "exclusive_write", 00:14:27.693 "zoned": false, 00:14:27.693 "supported_io_types": { 00:14:27.693 "read": true, 00:14:27.693 "write": true, 00:14:27.693 "unmap": true, 00:14:27.693 "flush": true, 00:14:27.693 "reset": true, 00:14:27.693 "nvme_admin": false, 00:14:27.693 "nvme_io": false, 00:14:27.693 "nvme_io_md": false, 00:14:27.693 "write_zeroes": true, 00:14:27.693 "zcopy": true, 00:14:27.693 "get_zone_info": false, 00:14:27.693 "zone_management": false, 00:14:27.693 "zone_append": false, 00:14:27.693 "compare": false, 00:14:27.693 "compare_and_write": false, 00:14:27.693 "abort": true, 00:14:27.693 "seek_hole": false, 00:14:27.693 "seek_data": false, 00:14:27.693 "copy": true, 00:14:27.693 "nvme_iov_md": false 00:14:27.693 }, 00:14:27.693 "memory_domains": [ 00:14:27.693 { 00:14:27.693 "dma_device_id": "system", 00:14:27.693 "dma_device_type": 1 00:14:27.693 }, 00:14:27.693 { 00:14:27.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.693 "dma_device_type": 2 00:14:27.693 } 00:14:27.693 ], 00:14:27.693 "driver_specific": {} 00:14:27.693 } 00:14:27.693 ] 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.693 20:27:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.693 "name": "Existed_Raid", 00:14:27.693 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:27.693 "strip_size_kb": 64, 00:14:27.693 "state": "configuring", 00:14:27.693 "raid_level": "concat", 00:14:27.693 "superblock": true, 00:14:27.693 "num_base_bdevs": 4, 00:14:27.693 "num_base_bdevs_discovered": 3, 00:14:27.693 "num_base_bdevs_operational": 4, 00:14:27.693 "base_bdevs_list": [ 00:14:27.693 { 00:14:27.693 "name": "BaseBdev1", 00:14:27.693 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:27.693 "is_configured": true, 00:14:27.693 "data_offset": 2048, 00:14:27.693 "data_size": 63488 00:14:27.693 }, 00:14:27.693 { 
00:14:27.693 "name": null, 00:14:27.693 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:27.693 "is_configured": false, 00:14:27.693 "data_offset": 0, 00:14:27.693 "data_size": 63488 00:14:27.693 }, 00:14:27.693 { 00:14:27.693 "name": "BaseBdev3", 00:14:27.693 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:27.693 "is_configured": true, 00:14:27.693 "data_offset": 2048, 00:14:27.693 "data_size": 63488 00:14:27.693 }, 00:14:27.693 { 00:14:27.693 "name": "BaseBdev4", 00:14:27.693 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:27.693 "is_configured": true, 00:14:27.693 "data_offset": 2048, 00:14:27.693 "data_size": 63488 00:14:27.693 } 00:14:27.693 ] 00:14:27.693 }' 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.693 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.258 [2024-11-26 20:27:21.578034] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.258 20:27:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.258 "name": "Existed_Raid", 00:14:28.258 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:28.258 "strip_size_kb": 64, 00:14:28.258 "state": "configuring", 00:14:28.258 "raid_level": "concat", 00:14:28.258 "superblock": true, 00:14:28.258 "num_base_bdevs": 4, 00:14:28.258 "num_base_bdevs_discovered": 2, 00:14:28.258 "num_base_bdevs_operational": 4, 00:14:28.258 "base_bdevs_list": [ 00:14:28.258 { 00:14:28.258 "name": "BaseBdev1", 00:14:28.258 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:28.258 "is_configured": true, 00:14:28.258 "data_offset": 2048, 00:14:28.258 "data_size": 63488 00:14:28.258 }, 00:14:28.258 { 00:14:28.258 "name": null, 00:14:28.258 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:28.258 "is_configured": false, 00:14:28.258 "data_offset": 0, 00:14:28.258 "data_size": 63488 00:14:28.258 }, 00:14:28.258 { 00:14:28.258 "name": null, 00:14:28.258 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:28.258 "is_configured": false, 00:14:28.258 "data_offset": 0, 00:14:28.258 "data_size": 63488 00:14:28.258 }, 00:14:28.258 { 00:14:28.258 "name": "BaseBdev4", 00:14:28.258 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:28.258 "is_configured": true, 00:14:28.258 "data_offset": 2048, 00:14:28.258 "data_size": 63488 00:14:28.258 } 00:14:28.258 ] 00:14:28.258 }' 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.258 20:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.516 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.517 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.517 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.517 20:27:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:28.517 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.775 [2024-11-26 20:27:22.109160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.775 "name": "Existed_Raid", 00:14:28.775 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:28.775 "strip_size_kb": 64, 00:14:28.775 "state": "configuring", 00:14:28.775 "raid_level": "concat", 00:14:28.775 "superblock": true, 00:14:28.775 "num_base_bdevs": 4, 00:14:28.775 "num_base_bdevs_discovered": 3, 00:14:28.775 "num_base_bdevs_operational": 4, 00:14:28.775 "base_bdevs_list": [ 00:14:28.775 { 00:14:28.775 "name": "BaseBdev1", 00:14:28.775 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:28.775 "is_configured": true, 00:14:28.775 "data_offset": 2048, 00:14:28.775 "data_size": 63488 00:14:28.775 }, 00:14:28.775 { 00:14:28.775 "name": null, 00:14:28.775 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:28.775 "is_configured": false, 00:14:28.775 "data_offset": 0, 00:14:28.775 "data_size": 63488 00:14:28.775 }, 00:14:28.775 { 00:14:28.775 "name": "BaseBdev3", 00:14:28.775 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:28.775 "is_configured": true, 00:14:28.775 "data_offset": 2048, 00:14:28.775 "data_size": 63488 00:14:28.775 }, 00:14:28.775 { 00:14:28.775 "name": "BaseBdev4", 00:14:28.775 "uuid": 
"8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:28.775 "is_configured": true, 00:14:28.775 "data_offset": 2048, 00:14:28.775 "data_size": 63488 00:14:28.775 } 00:14:28.775 ] 00:14:28.775 }' 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.775 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.033 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.033 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:29.033 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.033 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.033 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.292 [2024-11-26 20:27:22.612474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.292 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.293 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.293 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.293 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.293 "name": "Existed_Raid", 00:14:29.293 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:29.293 "strip_size_kb": 64, 00:14:29.293 "state": "configuring", 00:14:29.293 "raid_level": "concat", 00:14:29.293 "superblock": true, 00:14:29.293 "num_base_bdevs": 4, 00:14:29.293 "num_base_bdevs_discovered": 2, 00:14:29.293 "num_base_bdevs_operational": 4, 00:14:29.293 "base_bdevs_list": [ 00:14:29.293 { 00:14:29.293 "name": null, 00:14:29.293 
"uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:29.293 "is_configured": false, 00:14:29.293 "data_offset": 0, 00:14:29.293 "data_size": 63488 00:14:29.293 }, 00:14:29.293 { 00:14:29.293 "name": null, 00:14:29.293 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:29.293 "is_configured": false, 00:14:29.293 "data_offset": 0, 00:14:29.293 "data_size": 63488 00:14:29.293 }, 00:14:29.293 { 00:14:29.293 "name": "BaseBdev3", 00:14:29.293 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:29.293 "is_configured": true, 00:14:29.293 "data_offset": 2048, 00:14:29.293 "data_size": 63488 00:14:29.293 }, 00:14:29.293 { 00:14:29.293 "name": "BaseBdev4", 00:14:29.293 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:29.293 "is_configured": true, 00:14:29.293 "data_offset": 2048, 00:14:29.293 "data_size": 63488 00:14:29.293 } 00:14:29.293 ] 00:14:29.293 }' 00:14:29.293 20:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.293 20:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 [2024-11-26 20:27:23.249300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.862 20:27:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.862 "name": "Existed_Raid", 00:14:29.862 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:29.862 "strip_size_kb": 64, 00:14:29.862 "state": "configuring", 00:14:29.862 "raid_level": "concat", 00:14:29.862 "superblock": true, 00:14:29.862 "num_base_bdevs": 4, 00:14:29.862 "num_base_bdevs_discovered": 3, 00:14:29.862 "num_base_bdevs_operational": 4, 00:14:29.862 "base_bdevs_list": [ 00:14:29.862 { 00:14:29.862 "name": null, 00:14:29.862 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:29.862 "is_configured": false, 00:14:29.862 "data_offset": 0, 00:14:29.862 "data_size": 63488 00:14:29.862 }, 00:14:29.862 { 00:14:29.862 "name": "BaseBdev2", 00:14:29.862 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:29.862 "is_configured": true, 00:14:29.862 "data_offset": 2048, 00:14:29.862 "data_size": 63488 00:14:29.862 }, 00:14:29.862 { 00:14:29.862 "name": "BaseBdev3", 00:14:29.862 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:29.862 "is_configured": true, 00:14:29.862 "data_offset": 2048, 00:14:29.862 "data_size": 63488 00:14:29.862 }, 00:14:29.862 { 00:14:29.862 "name": "BaseBdev4", 00:14:29.862 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:29.862 "is_configured": true, 00:14:29.862 "data_offset": 2048, 00:14:29.862 "data_size": 63488 00:14:29.862 } 00:14:29.862 ] 00:14:29.862 }' 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.862 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.432 20:27:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b4f49b40-0064-43ed-97d9-7436e863f404 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 [2024-11-26 20:27:23.879143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:30.432 [2024-11-26 20:27:23.879417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:30.432 [2024-11-26 20:27:23.879431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.432 [2024-11-26 20:27:23.879714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:30.432 NewBaseBdev 00:14:30.432 [2024-11-26 20:27:23.879878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:30.432 [2024-11-26 20:27:23.879898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:30.432 [2024-11-26 20:27:23.880048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:30.432 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.432 20:27:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 [ 00:14:30.432 { 00:14:30.432 "name": "NewBaseBdev", 00:14:30.432 "aliases": [ 00:14:30.432 "b4f49b40-0064-43ed-97d9-7436e863f404" 00:14:30.432 ], 00:14:30.432 "product_name": "Malloc disk", 00:14:30.432 "block_size": 512, 00:14:30.432 "num_blocks": 65536, 00:14:30.432 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:30.432 "assigned_rate_limits": { 00:14:30.432 "rw_ios_per_sec": 0, 00:14:30.432 "rw_mbytes_per_sec": 0, 00:14:30.432 "r_mbytes_per_sec": 0, 00:14:30.432 "w_mbytes_per_sec": 0 00:14:30.432 }, 00:14:30.432 "claimed": true, 00:14:30.432 "claim_type": "exclusive_write", 00:14:30.432 "zoned": false, 00:14:30.432 "supported_io_types": { 00:14:30.432 "read": true, 00:14:30.432 "write": true, 00:14:30.432 "unmap": true, 00:14:30.432 "flush": true, 00:14:30.432 "reset": true, 00:14:30.432 "nvme_admin": false, 00:14:30.432 "nvme_io": false, 00:14:30.432 "nvme_io_md": false, 00:14:30.432 "write_zeroes": true, 00:14:30.432 "zcopy": true, 00:14:30.432 "get_zone_info": false, 00:14:30.432 "zone_management": false, 00:14:30.432 "zone_append": false, 00:14:30.432 "compare": false, 00:14:30.432 "compare_and_write": false, 00:14:30.433 "abort": true, 00:14:30.433 "seek_hole": false, 00:14:30.433 "seek_data": false, 00:14:30.433 "copy": true, 00:14:30.433 "nvme_iov_md": false 00:14:30.433 }, 00:14:30.433 "memory_domains": [ 00:14:30.433 { 00:14:30.433 "dma_device_id": "system", 00:14:30.433 "dma_device_type": 1 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.433 "dma_device_type": 2 00:14:30.433 } 00:14:30.433 ], 00:14:30.433 "driver_specific": {} 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.433 20:27:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.433 "name": "Existed_Raid", 00:14:30.433 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:30.433 "strip_size_kb": 64, 00:14:30.433 
"state": "online", 00:14:30.433 "raid_level": "concat", 00:14:30.433 "superblock": true, 00:14:30.433 "num_base_bdevs": 4, 00:14:30.433 "num_base_bdevs_discovered": 4, 00:14:30.433 "num_base_bdevs_operational": 4, 00:14:30.433 "base_bdevs_list": [ 00:14:30.433 { 00:14:30.433 "name": "NewBaseBdev", 00:14:30.433 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:30.433 "is_configured": true, 00:14:30.433 "data_offset": 2048, 00:14:30.433 "data_size": 63488 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "BaseBdev2", 00:14:30.433 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:30.433 "is_configured": true, 00:14:30.433 "data_offset": 2048, 00:14:30.433 "data_size": 63488 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "BaseBdev3", 00:14:30.433 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:30.433 "is_configured": true, 00:14:30.433 "data_offset": 2048, 00:14:30.433 "data_size": 63488 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "BaseBdev4", 00:14:30.433 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:30.433 "is_configured": true, 00:14:30.433 "data_offset": 2048, 00:14:30.433 "data_size": 63488 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 }' 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.433 20:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.004 
20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.004 [2024-11-26 20:27:24.434701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.004 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.004 "name": "Existed_Raid", 00:14:31.004 "aliases": [ 00:14:31.004 "91a5f0ec-e5b8-4acd-8d09-86071022debd" 00:14:31.004 ], 00:14:31.004 "product_name": "Raid Volume", 00:14:31.004 "block_size": 512, 00:14:31.004 "num_blocks": 253952, 00:14:31.004 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:31.004 "assigned_rate_limits": { 00:14:31.004 "rw_ios_per_sec": 0, 00:14:31.004 "rw_mbytes_per_sec": 0, 00:14:31.004 "r_mbytes_per_sec": 0, 00:14:31.004 "w_mbytes_per_sec": 0 00:14:31.004 }, 00:14:31.004 "claimed": false, 00:14:31.004 "zoned": false, 00:14:31.004 "supported_io_types": { 00:14:31.004 "read": true, 00:14:31.004 "write": true, 00:14:31.004 "unmap": true, 00:14:31.004 "flush": true, 00:14:31.004 "reset": true, 00:14:31.004 "nvme_admin": false, 00:14:31.004 "nvme_io": false, 00:14:31.004 "nvme_io_md": false, 00:14:31.004 "write_zeroes": true, 00:14:31.004 "zcopy": false, 00:14:31.004 "get_zone_info": false, 00:14:31.004 "zone_management": false, 00:14:31.004 "zone_append": false, 00:14:31.004 "compare": false, 00:14:31.004 "compare_and_write": false, 00:14:31.004 "abort": 
false, 00:14:31.004 "seek_hole": false, 00:14:31.004 "seek_data": false, 00:14:31.004 "copy": false, 00:14:31.004 "nvme_iov_md": false 00:14:31.004 }, 00:14:31.004 "memory_domains": [ 00:14:31.004 { 00:14:31.004 "dma_device_id": "system", 00:14:31.004 "dma_device_type": 1 00:14:31.004 }, 00:14:31.004 { 00:14:31.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.004 "dma_device_type": 2 00:14:31.004 }, 00:14:31.004 { 00:14:31.004 "dma_device_id": "system", 00:14:31.004 "dma_device_type": 1 00:14:31.004 }, 00:14:31.004 { 00:14:31.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.005 "dma_device_type": 2 00:14:31.005 }, 00:14:31.005 { 00:14:31.005 "dma_device_id": "system", 00:14:31.005 "dma_device_type": 1 00:14:31.005 }, 00:14:31.005 { 00:14:31.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.005 "dma_device_type": 2 00:14:31.005 }, 00:14:31.005 { 00:14:31.005 "dma_device_id": "system", 00:14:31.005 "dma_device_type": 1 00:14:31.005 }, 00:14:31.005 { 00:14:31.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.005 "dma_device_type": 2 00:14:31.005 } 00:14:31.005 ], 00:14:31.005 "driver_specific": { 00:14:31.005 "raid": { 00:14:31.005 "uuid": "91a5f0ec-e5b8-4acd-8d09-86071022debd", 00:14:31.005 "strip_size_kb": 64, 00:14:31.005 "state": "online", 00:14:31.005 "raid_level": "concat", 00:14:31.005 "superblock": true, 00:14:31.005 "num_base_bdevs": 4, 00:14:31.005 "num_base_bdevs_discovered": 4, 00:14:31.005 "num_base_bdevs_operational": 4, 00:14:31.005 "base_bdevs_list": [ 00:14:31.005 { 00:14:31.005 "name": "NewBaseBdev", 00:14:31.005 "uuid": "b4f49b40-0064-43ed-97d9-7436e863f404", 00:14:31.005 "is_configured": true, 00:14:31.005 "data_offset": 2048, 00:14:31.005 "data_size": 63488 00:14:31.005 }, 00:14:31.005 { 00:14:31.005 "name": "BaseBdev2", 00:14:31.005 "uuid": "cd7a2770-63a3-4538-a9fe-71e0ba88ceb4", 00:14:31.005 "is_configured": true, 00:14:31.005 "data_offset": 2048, 00:14:31.005 "data_size": 63488 00:14:31.005 }, 00:14:31.005 { 00:14:31.005 
"name": "BaseBdev3", 00:14:31.005 "uuid": "dad378e0-a1ff-4b13-a32d-eb8499af0e97", 00:14:31.005 "is_configured": true, 00:14:31.005 "data_offset": 2048, 00:14:31.005 "data_size": 63488 00:14:31.005 }, 00:14:31.005 { 00:14:31.005 "name": "BaseBdev4", 00:14:31.005 "uuid": "8d470f98-dd3d-4fc9-9e25-18a96ac3c261", 00:14:31.005 "is_configured": true, 00:14:31.005 "data_offset": 2048, 00:14:31.005 "data_size": 63488 00:14:31.005 } 00:14:31.005 ] 00:14:31.005 } 00:14:31.005 } 00:14:31.005 }' 00:14:31.005 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.005 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:31.005 BaseBdev2 00:14:31.005 BaseBdev3 00:14:31.005 BaseBdev4' 00:14:31.005 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.265 20:27:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.265 [2024-11-26 20:27:24.773723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.265 [2024-11-26 20:27:24.773760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.265 [2024-11-26 20:27:24.773871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.265 [2024-11-26 20:27:24.773940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.265 [2024-11-26 20:27:24.773950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72287 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72287 ']' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72287 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72287 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72287' 00:14:31.265 killing process with pid 72287 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72287 00:14:31.265 [2024-11-26 20:27:24.814283] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.265 20:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72287 00:14:31.834 [2024-11-26 20:27:25.230084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.213 20:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:33.213 00:14:33.213 real 0m12.347s 00:14:33.213 user 0m19.665s 00:14:33.213 sys 0m2.142s 00:14:33.213 20:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.213 20:27:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 ************************************ 00:14:33.213 END TEST raid_state_function_test_sb 00:14:33.213 ************************************ 00:14:33.213 20:27:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:33.213 20:27:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:33.213 20:27:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.213 20:27:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 ************************************ 00:14:33.213 START TEST raid_superblock_test 00:14:33.213 ************************************ 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72963 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72963 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72963 ']' 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.213 20:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 [2024-11-26 20:27:26.579203] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:14:33.213 [2024-11-26 20:27:26.579435] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72963 ] 00:14:33.213 [2024-11-26 20:27:26.732769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.471 [2024-11-26 20:27:26.855308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.730 [2024-11-26 20:27:27.057333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.730 [2024-11-26 20:27:27.057378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:33.988 
20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.988 malloc1 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.988 [2024-11-26 20:27:27.484679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.988 [2024-11-26 20:27:27.484805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.988 [2024-11-26 20:27:27.484870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:33.988 [2024-11-26 20:27:27.484925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.988 [2024-11-26 20:27:27.487229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.988 [2024-11-26 20:27:27.487314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.988 pt1 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.988 malloc2 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.988 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.247 [2024-11-26 20:27:27.544340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.247 [2024-11-26 20:27:27.544471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.247 [2024-11-26 20:27:27.544522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.247 [2024-11-26 20:27:27.544604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.247 [2024-11-26 20:27:27.547080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.247 [2024-11-26 20:27:27.547159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.247 
pt2 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.247 malloc3 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.247 [2024-11-26 20:27:27.622961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:34.247 [2024-11-26 20:27:27.623109] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.247 [2024-11-26 20:27:27.623202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:34.247 [2024-11-26 20:27:27.623291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.247 [2024-11-26 20:27:27.625875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.247 [2024-11-26 20:27:27.625986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:34.247 pt3 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.247 malloc4 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.247 [2024-11-26 20:27:27.683707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:34.247 [2024-11-26 20:27:27.683788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.247 [2024-11-26 20:27:27.683819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:34.247 [2024-11-26 20:27:27.683832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.247 [2024-11-26 20:27:27.686779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.247 pt4 00:14:34.247 [2024-11-26 20:27:27.686879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.247 [2024-11-26 20:27:27.695808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.247 [2024-11-26 
20:27:27.698257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.247 [2024-11-26 20:27:27.698403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:34.247 [2024-11-26 20:27:27.698473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:34.247 [2024-11-26 20:27:27.698715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:34.247 [2024-11-26 20:27:27.698740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:34.247 [2024-11-26 20:27:27.699109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:34.247 [2024-11-26 20:27:27.699367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:34.247 [2024-11-26 20:27:27.699392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:34.247 [2024-11-26 20:27:27.699591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.247 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.247 "name": "raid_bdev1", 00:14:34.247 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:34.247 "strip_size_kb": 64, 00:14:34.247 "state": "online", 00:14:34.247 "raid_level": "concat", 00:14:34.247 "superblock": true, 00:14:34.247 "num_base_bdevs": 4, 00:14:34.247 "num_base_bdevs_discovered": 4, 00:14:34.247 "num_base_bdevs_operational": 4, 00:14:34.247 "base_bdevs_list": [ 00:14:34.248 { 00:14:34.248 "name": "pt1", 00:14:34.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.248 "is_configured": true, 00:14:34.248 "data_offset": 2048, 00:14:34.248 "data_size": 63488 00:14:34.248 }, 00:14:34.248 { 00:14:34.248 "name": "pt2", 00:14:34.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.248 "is_configured": true, 00:14:34.248 "data_offset": 2048, 00:14:34.248 "data_size": 63488 00:14:34.248 }, 00:14:34.248 { 00:14:34.248 "name": "pt3", 00:14:34.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.248 "is_configured": true, 00:14:34.248 "data_offset": 2048, 00:14:34.248 
"data_size": 63488 00:14:34.248 }, 00:14:34.248 { 00:14:34.248 "name": "pt4", 00:14:34.248 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:34.248 "is_configured": true, 00:14:34.248 "data_offset": 2048, 00:14:34.248 "data_size": 63488 00:14:34.248 } 00:14:34.248 ] 00:14:34.248 }' 00:14:34.248 20:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.248 20:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.813 [2024-11-26 20:27:28.219291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.813 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:34.813 "name": "raid_bdev1", 00:14:34.813 "aliases": [ 00:14:34.813 "5247403e-bd73-4ace-bd25-82b0e9cbaf7f" 
00:14:34.813 ], 00:14:34.813 "product_name": "Raid Volume", 00:14:34.813 "block_size": 512, 00:14:34.813 "num_blocks": 253952, 00:14:34.813 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:34.813 "assigned_rate_limits": { 00:14:34.813 "rw_ios_per_sec": 0, 00:14:34.813 "rw_mbytes_per_sec": 0, 00:14:34.813 "r_mbytes_per_sec": 0, 00:14:34.813 "w_mbytes_per_sec": 0 00:14:34.813 }, 00:14:34.813 "claimed": false, 00:14:34.813 "zoned": false, 00:14:34.813 "supported_io_types": { 00:14:34.813 "read": true, 00:14:34.813 "write": true, 00:14:34.813 "unmap": true, 00:14:34.813 "flush": true, 00:14:34.813 "reset": true, 00:14:34.813 "nvme_admin": false, 00:14:34.813 "nvme_io": false, 00:14:34.813 "nvme_io_md": false, 00:14:34.813 "write_zeroes": true, 00:14:34.813 "zcopy": false, 00:14:34.814 "get_zone_info": false, 00:14:34.814 "zone_management": false, 00:14:34.814 "zone_append": false, 00:14:34.814 "compare": false, 00:14:34.814 "compare_and_write": false, 00:14:34.814 "abort": false, 00:14:34.814 "seek_hole": false, 00:14:34.814 "seek_data": false, 00:14:34.814 "copy": false, 00:14:34.814 "nvme_iov_md": false 00:14:34.814 }, 00:14:34.814 "memory_domains": [ 00:14:34.814 { 00:14:34.814 "dma_device_id": "system", 00:14:34.814 "dma_device_type": 1 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.814 "dma_device_type": 2 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "dma_device_id": "system", 00:14:34.814 "dma_device_type": 1 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.814 "dma_device_type": 2 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "dma_device_id": "system", 00:14:34.814 "dma_device_type": 1 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.814 "dma_device_type": 2 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "dma_device_id": "system", 00:14:34.814 "dma_device_type": 1 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:34.814 "dma_device_type": 2 00:14:34.814 } 00:14:34.814 ], 00:14:34.814 "driver_specific": { 00:14:34.814 "raid": { 00:14:34.814 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:34.814 "strip_size_kb": 64, 00:14:34.814 "state": "online", 00:14:34.814 "raid_level": "concat", 00:14:34.814 "superblock": true, 00:14:34.814 "num_base_bdevs": 4, 00:14:34.814 "num_base_bdevs_discovered": 4, 00:14:34.814 "num_base_bdevs_operational": 4, 00:14:34.814 "base_bdevs_list": [ 00:14:34.814 { 00:14:34.814 "name": "pt1", 00:14:34.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.814 "is_configured": true, 00:14:34.814 "data_offset": 2048, 00:14:34.814 "data_size": 63488 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "name": "pt2", 00:14:34.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.814 "is_configured": true, 00:14:34.814 "data_offset": 2048, 00:14:34.814 "data_size": 63488 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "name": "pt3", 00:14:34.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.814 "is_configured": true, 00:14:34.814 "data_offset": 2048, 00:14:34.814 "data_size": 63488 00:14:34.814 }, 00:14:34.814 { 00:14:34.814 "name": "pt4", 00:14:34.814 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:34.814 "is_configured": true, 00:14:34.814 "data_offset": 2048, 00:14:34.814 "data_size": 63488 00:14:34.814 } 00:14:34.814 ] 00:14:34.814 } 00:14:34.814 } 00:14:34.814 }' 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:34.814 pt2 00:14:34.814 pt3 00:14:34.814 pt4' 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.814 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.072 20:27:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.072 [2024-11-26 20:27:28.526764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5247403e-bd73-4ace-bd25-82b0e9cbaf7f 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5247403e-bd73-4ace-bd25-82b0e9cbaf7f ']' 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.072 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.072 [2024-11-26 20:27:28.574314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.072 [2024-11-26 20:27:28.574344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.072 [2024-11-26 20:27:28.574441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.072 [2024-11-26 20:27:28.574521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.073 [2024-11-26 20:27:28.574537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.073 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.073 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:35.073 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.073 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:35.073 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.073 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.332 20:27:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.332 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.332 [2024-11-26 20:27:28.742054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:35.332 [2024-11-26 20:27:28.744079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:35.332 [2024-11-26 20:27:28.744201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:35.332 [2024-11-26 20:27:28.744247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:35.332 [2024-11-26 20:27:28.744338] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:35.332 [2024-11-26 20:27:28.744404] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:35.332 [2024-11-26 20:27:28.744430] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:35.332 [2024-11-26 20:27:28.744454] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:35.332 [2024-11-26 20:27:28.744471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.333 [2024-11-26 20:27:28.744485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:14:35.333 request: 00:14:35.333 { 00:14:35.333 "name": "raid_bdev1", 00:14:35.333 "raid_level": "concat", 00:14:35.333 "base_bdevs": [ 00:14:35.333 "malloc1", 00:14:35.333 "malloc2", 00:14:35.333 "malloc3", 00:14:35.333 "malloc4" 00:14:35.333 ], 00:14:35.333 "strip_size_kb": 64, 00:14:35.333 "superblock": false, 00:14:35.333 "method": "bdev_raid_create", 00:14:35.333 "req_id": 1 00:14:35.333 } 00:14:35.333 Got JSON-RPC error response 00:14:35.333 response: 00:14:35.333 { 00:14:35.333 "code": -17, 00:14:35.333 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:35.333 } 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.333 [2024-11-26 20:27:28.809878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:35.333 [2024-11-26 20:27:28.810003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.333 [2024-11-26 20:27:28.810047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:35.333 [2024-11-26 20:27:28.810096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.333 [2024-11-26 20:27:28.812586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.333 [2024-11-26 20:27:28.812694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:35.333 [2024-11-26 20:27:28.812839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:35.333 [2024-11-26 20:27:28.812959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:35.333 pt1 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.333 "name": "raid_bdev1", 00:14:35.333 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:35.333 "strip_size_kb": 64, 00:14:35.333 "state": "configuring", 00:14:35.333 "raid_level": "concat", 00:14:35.333 "superblock": true, 00:14:35.333 "num_base_bdevs": 4, 00:14:35.333 "num_base_bdevs_discovered": 1, 00:14:35.333 "num_base_bdevs_operational": 4, 00:14:35.333 "base_bdevs_list": [ 00:14:35.333 { 00:14:35.333 "name": "pt1", 00:14:35.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.333 "is_configured": true, 00:14:35.333 "data_offset": 2048, 00:14:35.333 "data_size": 63488 00:14:35.333 }, 00:14:35.333 { 00:14:35.333 "name": null, 00:14:35.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.333 "is_configured": false, 00:14:35.333 "data_offset": 2048, 00:14:35.333 "data_size": 63488 00:14:35.333 }, 00:14:35.333 { 00:14:35.333 "name": null, 00:14:35.333 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.333 "is_configured": false, 00:14:35.333 "data_offset": 2048, 00:14:35.333 "data_size": 63488 00:14:35.333 }, 00:14:35.333 { 00:14:35.333 "name": null, 00:14:35.333 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.333 "is_configured": false, 00:14:35.333 "data_offset": 2048, 00:14:35.333 "data_size": 63488 00:14:35.333 } 00:14:35.333 ] 00:14:35.333 }' 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.333 20:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.901 [2024-11-26 20:27:29.309072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:35.901 [2024-11-26 20:27:29.309164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.901 [2024-11-26 20:27:29.309188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:35.901 [2024-11-26 20:27:29.309201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.901 [2024-11-26 20:27:29.309716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.901 [2024-11-26 20:27:29.309747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:35.901 [2024-11-26 20:27:29.309840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:35.901 [2024-11-26 20:27:29.309867] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:35.901 pt2 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.901 [2024-11-26 20:27:29.321070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.901 20:27:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.901 "name": "raid_bdev1", 00:14:35.901 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:35.901 "strip_size_kb": 64, 00:14:35.901 "state": "configuring", 00:14:35.901 "raid_level": "concat", 00:14:35.901 "superblock": true, 00:14:35.901 "num_base_bdevs": 4, 00:14:35.901 "num_base_bdevs_discovered": 1, 00:14:35.901 "num_base_bdevs_operational": 4, 00:14:35.901 "base_bdevs_list": [ 00:14:35.901 { 00:14:35.901 "name": "pt1", 00:14:35.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.901 "is_configured": true, 00:14:35.901 "data_offset": 2048, 00:14:35.901 "data_size": 63488 00:14:35.901 }, 00:14:35.901 { 00:14:35.901 "name": null, 00:14:35.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.901 "is_configured": false, 00:14:35.901 "data_offset": 0, 00:14:35.901 "data_size": 63488 00:14:35.901 }, 00:14:35.901 { 00:14:35.901 "name": null, 00:14:35.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.901 "is_configured": false, 00:14:35.901 "data_offset": 2048, 00:14:35.901 "data_size": 63488 00:14:35.901 }, 00:14:35.901 { 00:14:35.901 "name": null, 00:14:35.901 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.901 "is_configured": false, 00:14:35.901 "data_offset": 2048, 00:14:35.901 "data_size": 63488 00:14:35.901 } 00:14:35.901 ] 00:14:35.901 }' 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.901 20:27:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.470 [2024-11-26 20:27:29.776387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.470 [2024-11-26 20:27:29.776537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.470 [2024-11-26 20:27:29.776582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:36.470 [2024-11-26 20:27:29.776648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.470 [2024-11-26 20:27:29.777170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.470 [2024-11-26 20:27:29.777236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.470 [2024-11-26 20:27:29.777398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:36.470 [2024-11-26 20:27:29.777461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:36.470 pt2 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:36.470 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.471 [2024-11-26 20:27:29.788316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:36.471 [2024-11-26 20:27:29.788419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.471 [2024-11-26 20:27:29.788458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:36.471 [2024-11-26 20:27:29.788489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.471 [2024-11-26 20:27:29.789006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.471 [2024-11-26 20:27:29.789085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:36.471 [2024-11-26 20:27:29.789204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:36.471 [2024-11-26 20:27:29.789286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:36.471 pt3 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.471 [2024-11-26 20:27:29.800237] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:36.471 [2024-11-26 20:27:29.800329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.471 [2024-11-26 20:27:29.800382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:36.471 [2024-11-26 20:27:29.800413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.471 [2024-11-26 20:27:29.800879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.471 [2024-11-26 20:27:29.800937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:36.471 [2024-11-26 20:27:29.801048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:36.471 [2024-11-26 20:27:29.801103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:36.471 [2024-11-26 20:27:29.801300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:36.471 [2024-11-26 20:27:29.801343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:36.471 [2024-11-26 20:27:29.801615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:36.471 [2024-11-26 20:27:29.801819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:36.471 [2024-11-26 20:27:29.801838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:36.471 [2024-11-26 20:27:29.801986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.471 pt4 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.471 "name": "raid_bdev1", 00:14:36.471 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:36.471 "strip_size_kb": 64, 00:14:36.471 "state": "online", 00:14:36.471 "raid_level": "concat", 00:14:36.471 
"superblock": true, 00:14:36.471 "num_base_bdevs": 4, 00:14:36.471 "num_base_bdevs_discovered": 4, 00:14:36.471 "num_base_bdevs_operational": 4, 00:14:36.471 "base_bdevs_list": [ 00:14:36.471 { 00:14:36.471 "name": "pt1", 00:14:36.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.471 "is_configured": true, 00:14:36.471 "data_offset": 2048, 00:14:36.471 "data_size": 63488 00:14:36.471 }, 00:14:36.471 { 00:14:36.471 "name": "pt2", 00:14:36.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.471 "is_configured": true, 00:14:36.471 "data_offset": 2048, 00:14:36.471 "data_size": 63488 00:14:36.471 }, 00:14:36.471 { 00:14:36.471 "name": "pt3", 00:14:36.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.471 "is_configured": true, 00:14:36.471 "data_offset": 2048, 00:14:36.471 "data_size": 63488 00:14:36.471 }, 00:14:36.471 { 00:14:36.471 "name": "pt4", 00:14:36.471 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.471 "is_configured": true, 00:14:36.471 "data_offset": 2048, 00:14:36.471 "data_size": 63488 00:14:36.471 } 00:14:36.471 ] 00:14:36.471 }' 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.471 20:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:36.731 20:27:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.731 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:36.991 [2024-11-26 20:27:30.287846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.991 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:36.991 "name": "raid_bdev1", 00:14:36.991 "aliases": [ 00:14:36.991 "5247403e-bd73-4ace-bd25-82b0e9cbaf7f" 00:14:36.991 ], 00:14:36.991 "product_name": "Raid Volume", 00:14:36.991 "block_size": 512, 00:14:36.991 "num_blocks": 253952, 00:14:36.991 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:36.991 "assigned_rate_limits": { 00:14:36.991 "rw_ios_per_sec": 0, 00:14:36.991 "rw_mbytes_per_sec": 0, 00:14:36.991 "r_mbytes_per_sec": 0, 00:14:36.991 "w_mbytes_per_sec": 0 00:14:36.991 }, 00:14:36.991 "claimed": false, 00:14:36.991 "zoned": false, 00:14:36.991 "supported_io_types": { 00:14:36.991 "read": true, 00:14:36.991 "write": true, 00:14:36.991 "unmap": true, 00:14:36.991 "flush": true, 00:14:36.991 "reset": true, 00:14:36.991 "nvme_admin": false, 00:14:36.991 "nvme_io": false, 00:14:36.991 "nvme_io_md": false, 00:14:36.991 "write_zeroes": true, 00:14:36.991 "zcopy": false, 00:14:36.991 "get_zone_info": false, 00:14:36.991 "zone_management": false, 00:14:36.991 "zone_append": false, 00:14:36.991 "compare": false, 00:14:36.991 "compare_and_write": false, 00:14:36.991 "abort": false, 00:14:36.991 "seek_hole": false, 00:14:36.991 "seek_data": false, 00:14:36.991 "copy": false, 00:14:36.991 "nvme_iov_md": false 00:14:36.991 }, 00:14:36.991 
"memory_domains": [ 00:14:36.991 { 00:14:36.991 "dma_device_id": "system", 00:14:36.991 "dma_device_type": 1 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.991 "dma_device_type": 2 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "dma_device_id": "system", 00:14:36.991 "dma_device_type": 1 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.991 "dma_device_type": 2 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "dma_device_id": "system", 00:14:36.991 "dma_device_type": 1 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.991 "dma_device_type": 2 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "dma_device_id": "system", 00:14:36.991 "dma_device_type": 1 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.991 "dma_device_type": 2 00:14:36.991 } 00:14:36.991 ], 00:14:36.991 "driver_specific": { 00:14:36.991 "raid": { 00:14:36.991 "uuid": "5247403e-bd73-4ace-bd25-82b0e9cbaf7f", 00:14:36.991 "strip_size_kb": 64, 00:14:36.991 "state": "online", 00:14:36.991 "raid_level": "concat", 00:14:36.991 "superblock": true, 00:14:36.991 "num_base_bdevs": 4, 00:14:36.991 "num_base_bdevs_discovered": 4, 00:14:36.991 "num_base_bdevs_operational": 4, 00:14:36.991 "base_bdevs_list": [ 00:14:36.991 { 00:14:36.991 "name": "pt1", 00:14:36.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.991 "is_configured": true, 00:14:36.991 "data_offset": 2048, 00:14:36.991 "data_size": 63488 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "name": "pt2", 00:14:36.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.991 "is_configured": true, 00:14:36.991 "data_offset": 2048, 00:14:36.991 "data_size": 63488 00:14:36.991 }, 00:14:36.991 { 00:14:36.991 "name": "pt3", 00:14:36.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.991 "is_configured": true, 00:14:36.992 "data_offset": 2048, 00:14:36.992 "data_size": 63488 
00:14:36.992 }, 00:14:36.992 { 00:14:36.992 "name": "pt4", 00:14:36.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.992 "is_configured": true, 00:14:36.992 "data_offset": 2048, 00:14:36.992 "data_size": 63488 00:14:36.992 } 00:14:36.992 ] 00:14:36.992 } 00:14:36.992 } 00:14:36.992 }' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:36.992 pt2 00:14:36.992 pt3 00:14:36.992 pt4' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.992 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:37.253 [2024-11-26 20:27:30.615242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5247403e-bd73-4ace-bd25-82b0e9cbaf7f '!=' 5247403e-bd73-4ace-bd25-82b0e9cbaf7f ']' 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72963 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72963 ']' 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72963 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72963 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72963' 00:14:37.253 killing process with pid 72963 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72963 00:14:37.253 [2024-11-26 20:27:30.708388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.253 20:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72963 00:14:37.253 [2024-11-26 20:27:30.708579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.253 [2024-11-26 20:27:30.708696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.253 [2024-11-26 20:27:30.708707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:37.824 [2024-11-26 20:27:31.148956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.206 20:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:39.206 00:14:39.206 real 0m5.903s 00:14:39.206 user 0m8.411s 00:14:39.206 sys 0m0.965s 00:14:39.206 20:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.206 20:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.206 ************************************ 00:14:39.206 END TEST raid_superblock_test 
00:14:39.206 ************************************ 00:14:39.206 20:27:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:14:39.206 20:27:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:39.206 20:27:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.206 20:27:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.206 ************************************ 00:14:39.206 START TEST raid_read_error_test 00:14:39.206 ************************************ 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bmtDuVYE6P 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73234 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73234 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73234 ']' 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.206 20:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.206 [2024-11-26 20:27:32.562441] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:14:39.206 [2024-11-26 20:27:32.562647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73234 ] 00:14:39.206 [2024-11-26 20:27:32.737782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.465 [2024-11-26 20:27:32.856668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.725 [2024-11-26 20:27:33.072081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.725 [2024-11-26 20:27:33.072263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.984 BaseBdev1_malloc 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.984 true 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.984 [2024-11-26 20:27:33.493837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:39.984 [2024-11-26 20:27:33.493993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.984 [2024-11-26 20:27:33.494027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:39.984 [2024-11-26 20:27:33.494042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.984 [2024-11-26 20:27:33.496630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.984 [2024-11-26 20:27:33.496709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:39.984 BaseBdev1 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.984 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 BaseBdev2_malloc 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 true 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 [2024-11-26 20:27:33.566476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:40.244 [2024-11-26 20:27:33.566534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.244 [2024-11-26 20:27:33.566557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:40.244 [2024-11-26 20:27:33.566568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.244 [2024-11-26 20:27:33.568856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.244 [2024-11-26 20:27:33.568972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:40.244 BaseBdev2 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 BaseBdev3_malloc 00:14:40.244 20:27:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 true 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 [2024-11-26 20:27:33.648849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:40.244 [2024-11-26 20:27:33.648910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.244 [2024-11-26 20:27:33.648932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:40.244 [2024-11-26 20:27:33.648945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.244 [2024-11-26 20:27:33.651396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.244 [2024-11-26 20:27:33.651499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:40.244 BaseBdev3 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 BaseBdev4_malloc 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 true 00:14:40.244 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.245 [2024-11-26 20:27:33.722686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:40.245 [2024-11-26 20:27:33.722757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.245 [2024-11-26 20:27:33.722783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:40.245 [2024-11-26 20:27:33.722795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.245 [2024-11-26 20:27:33.725352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.245 [2024-11-26 20:27:33.725497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:40.245 BaseBdev4 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.245 [2024-11-26 20:27:33.734818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.245 [2024-11-26 20:27:33.737057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.245 [2024-11-26 20:27:33.737234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.245 [2024-11-26 20:27:33.737333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.245 [2024-11-26 20:27:33.737641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:40.245 [2024-11-26 20:27:33.737659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:40.245 [2024-11-26 20:27:33.737995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:40.245 [2024-11-26 20:27:33.738215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:40.245 [2024-11-26 20:27:33.738228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:40.245 [2024-11-26 20:27:33.738460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:40.245 20:27:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.245 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.506 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.506 "name": "raid_bdev1", 00:14:40.506 "uuid": "cb8e4409-d5a6-418f-9e1e-1aa70cf051eb", 00:14:40.506 "strip_size_kb": 64, 00:14:40.506 "state": "online", 00:14:40.506 "raid_level": "concat", 00:14:40.506 "superblock": true, 00:14:40.506 "num_base_bdevs": 4, 00:14:40.506 "num_base_bdevs_discovered": 4, 00:14:40.506 "num_base_bdevs_operational": 4, 00:14:40.506 "base_bdevs_list": [ 
00:14:40.506 { 00:14:40.506 "name": "BaseBdev1", 00:14:40.506 "uuid": "a61b6b27-ceac-577a-b41f-f802de13a18b", 00:14:40.506 "is_configured": true, 00:14:40.506 "data_offset": 2048, 00:14:40.506 "data_size": 63488 00:14:40.506 }, 00:14:40.506 { 00:14:40.506 "name": "BaseBdev2", 00:14:40.506 "uuid": "003a6eb6-2277-52e5-b572-b3aeb657bd01", 00:14:40.506 "is_configured": true, 00:14:40.506 "data_offset": 2048, 00:14:40.506 "data_size": 63488 00:14:40.506 }, 00:14:40.506 { 00:14:40.506 "name": "BaseBdev3", 00:14:40.506 "uuid": "abfabf72-ce21-5613-93e4-181ddb1b1ec2", 00:14:40.506 "is_configured": true, 00:14:40.506 "data_offset": 2048, 00:14:40.506 "data_size": 63488 00:14:40.506 }, 00:14:40.506 { 00:14:40.506 "name": "BaseBdev4", 00:14:40.506 "uuid": "45814eb3-d31f-5b21-b02a-202151add41b", 00:14:40.506 "is_configured": true, 00:14:40.506 "data_offset": 2048, 00:14:40.506 "data_size": 63488 00:14:40.506 } 00:14:40.506 ] 00:14:40.506 }' 00:14:40.506 20:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.506 20:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.766 20:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:40.766 20:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:40.766 [2024-11-26 20:27:34.271280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 20:27:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 20:27:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.707 "name": "raid_bdev1", 00:14:41.707 "uuid": "cb8e4409-d5a6-418f-9e1e-1aa70cf051eb", 00:14:41.707 "strip_size_kb": 64, 00:14:41.707 "state": "online", 00:14:41.707 "raid_level": "concat", 00:14:41.707 "superblock": true, 00:14:41.707 "num_base_bdevs": 4, 00:14:41.707 "num_base_bdevs_discovered": 4, 00:14:41.707 "num_base_bdevs_operational": 4, 00:14:41.707 "base_bdevs_list": [ 00:14:41.707 { 00:14:41.707 "name": "BaseBdev1", 00:14:41.707 "uuid": "a61b6b27-ceac-577a-b41f-f802de13a18b", 00:14:41.707 "is_configured": true, 00:14:41.707 "data_offset": 2048, 00:14:41.707 "data_size": 63488 00:14:41.707 }, 00:14:41.707 { 00:14:41.707 "name": "BaseBdev2", 00:14:41.707 "uuid": "003a6eb6-2277-52e5-b572-b3aeb657bd01", 00:14:41.707 "is_configured": true, 00:14:41.707 "data_offset": 2048, 00:14:41.707 "data_size": 63488 00:14:41.707 }, 00:14:41.707 { 00:14:41.707 "name": "BaseBdev3", 00:14:41.707 "uuid": "abfabf72-ce21-5613-93e4-181ddb1b1ec2", 00:14:41.707 "is_configured": true, 00:14:41.707 "data_offset": 2048, 00:14:41.707 "data_size": 63488 00:14:41.707 }, 00:14:41.707 { 00:14:41.707 "name": "BaseBdev4", 00:14:41.707 "uuid": "45814eb3-d31f-5b21-b02a-202151add41b", 00:14:41.707 "is_configured": true, 00:14:41.707 "data_offset": 2048, 00:14:41.707 "data_size": 63488 00:14:41.707 } 00:14:41.707 ] 00:14:41.707 }' 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.707 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.276 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.276 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.276 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.276 [2024-11-26 20:27:35.676425] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.276 [2024-11-26 20:27:35.676528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.276 [2024-11-26 20:27:35.679829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.276 [2024-11-26 20:27:35.679946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.276 [2024-11-26 20:27:35.680027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.276 [2024-11-26 20:27:35.680080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:42.276 { 00:14:42.276 "results": [ 00:14:42.276 { 00:14:42.276 "job": "raid_bdev1", 00:14:42.276 "core_mask": "0x1", 00:14:42.276 "workload": "randrw", 00:14:42.276 "percentage": 50, 00:14:42.276 "status": "finished", 00:14:42.276 "queue_depth": 1, 00:14:42.276 "io_size": 131072, 00:14:42.276 "runtime": 1.405826, 00:14:42.276 "iops": 13959.764579684826, 00:14:42.277 "mibps": 1744.9705724606033, 00:14:42.277 "io_failed": 1, 00:14:42.277 "io_timeout": 0, 00:14:42.277 "avg_latency_us": 99.25964745990191, 00:14:42.277 "min_latency_us": 28.05938864628821, 00:14:42.277 "max_latency_us": 1752.8733624454148 00:14:42.277 } 00:14:42.277 ], 00:14:42.277 "core_count": 1 00:14:42.277 } 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73234 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73234 ']' 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73234 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73234 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.277 killing process with pid 73234 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73234' 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73234 00:14:42.277 [2024-11-26 20:27:35.722893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.277 20:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73234 00:14:42.542 [2024-11-26 20:27:36.095166] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bmtDuVYE6P 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:43.926 00:14:43.926 real 0m4.959s 00:14:43.926 user 0m5.851s 00:14:43.926 sys 0m0.586s 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:43.926 20:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.926 ************************************ 00:14:43.926 END TEST raid_read_error_test 00:14:43.926 ************************************ 00:14:43.926 20:27:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:43.926 20:27:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:43.926 20:27:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.926 20:27:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.926 ************************************ 00:14:43.926 START TEST raid_write_error_test 00:14:43.926 ************************************ 00:14:43.926 20:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:14:43.926 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:43.926 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:43.926 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.J5rF5n93ly 00:14:44.186 20:27:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73375 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73375 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73375 ']' 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.186 20:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.186 [2024-11-26 20:27:37.592355] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:14:44.186 [2024-11-26 20:27:37.592484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73375 ] 00:14:44.471 [2024-11-26 20:27:37.770532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.471 [2024-11-26 20:27:37.903334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.729 [2024-11-26 20:27:38.135182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.729 [2024-11-26 20:27:38.135268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.988 BaseBdev1_malloc 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.988 true 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.988 [2024-11-26 20:27:38.523458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:44.988 [2024-11-26 20:27:38.523586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.988 [2024-11-26 20:27:38.523615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:44.988 [2024-11-26 20:27:38.523629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.988 [2024-11-26 20:27:38.526153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.988 [2024-11-26 20:27:38.526198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.988 BaseBdev1 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.988 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 BaseBdev2_malloc 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:45.248 20:27:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 true 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 [2024-11-26 20:27:38.597523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:45.248 [2024-11-26 20:27:38.597657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.248 [2024-11-26 20:27:38.597686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:45.248 [2024-11-26 20:27:38.597699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.248 [2024-11-26 20:27:38.600091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.248 [2024-11-26 20:27:38.600135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:45.248 BaseBdev2 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:45.248 BaseBdev3_malloc 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 true 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 [2024-11-26 20:27:38.682232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:45.248 [2024-11-26 20:27:38.682296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.248 [2024-11-26 20:27:38.682316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:45.248 [2024-11-26 20:27:38.682326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.248 [2024-11-26 20:27:38.684523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.248 [2024-11-26 20:27:38.684560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:45.248 BaseBdev3 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 BaseBdev4_malloc 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 true 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 [2024-11-26 20:27:38.752964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:45.248 [2024-11-26 20:27:38.753025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.248 [2024-11-26 20:27:38.753046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:45.248 [2024-11-26 20:27:38.753059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.248 [2024-11-26 20:27:38.755328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.248 [2024-11-26 20:27:38.755369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:45.248 BaseBdev4 
00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 [2024-11-26 20:27:38.765019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.248 [2024-11-26 20:27:38.766990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.248 [2024-11-26 20:27:38.767116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.248 [2024-11-26 20:27:38.767185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.248 [2024-11-26 20:27:38.767460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:45.248 [2024-11-26 20:27:38.767477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:45.248 [2024-11-26 20:27:38.767739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:45.248 [2024-11-26 20:27:38.767912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:45.248 [2024-11-26 20:27:38.767924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:45.248 [2024-11-26 20:27:38.768086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.248 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.507 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.507 "name": "raid_bdev1", 00:14:45.507 "uuid": "90e987d0-f94c-4d96-be65-17230c84358a", 00:14:45.507 "strip_size_kb": 64, 00:14:45.507 "state": "online", 00:14:45.507 "raid_level": "concat", 00:14:45.507 "superblock": true, 00:14:45.507 "num_base_bdevs": 4, 00:14:45.507 "num_base_bdevs_discovered": 4, 00:14:45.507 
"num_base_bdevs_operational": 4, 00:14:45.507 "base_bdevs_list": [ 00:14:45.507 { 00:14:45.507 "name": "BaseBdev1", 00:14:45.507 "uuid": "5a2b3e93-edc3-57e8-84d9-3ef925379369", 00:14:45.507 "is_configured": true, 00:14:45.507 "data_offset": 2048, 00:14:45.507 "data_size": 63488 00:14:45.507 }, 00:14:45.507 { 00:14:45.507 "name": "BaseBdev2", 00:14:45.507 "uuid": "a01175e2-67a6-50b7-9c68-3352f0a133cc", 00:14:45.507 "is_configured": true, 00:14:45.507 "data_offset": 2048, 00:14:45.507 "data_size": 63488 00:14:45.507 }, 00:14:45.507 { 00:14:45.507 "name": "BaseBdev3", 00:14:45.507 "uuid": "07861d09-722a-5996-8061-e159b0296d93", 00:14:45.507 "is_configured": true, 00:14:45.507 "data_offset": 2048, 00:14:45.507 "data_size": 63488 00:14:45.507 }, 00:14:45.507 { 00:14:45.507 "name": "BaseBdev4", 00:14:45.507 "uuid": "59b36e7e-3807-5451-974d-065fa2682d05", 00:14:45.507 "is_configured": true, 00:14:45.507 "data_offset": 2048, 00:14:45.507 "data_size": 63488 00:14:45.507 } 00:14:45.507 ] 00:14:45.507 }' 00:14:45.507 20:27:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.507 20:27:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.769 20:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:45.769 20:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:45.769 [2024-11-26 20:27:39.298005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.718 20:27:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.718 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.718 "name": "raid_bdev1", 00:14:46.718 "uuid": "90e987d0-f94c-4d96-be65-17230c84358a", 00:14:46.718 "strip_size_kb": 64, 00:14:46.718 "state": "online", 00:14:46.718 "raid_level": "concat", 00:14:46.718 "superblock": true, 00:14:46.718 "num_base_bdevs": 4, 00:14:46.718 "num_base_bdevs_discovered": 4, 00:14:46.718 "num_base_bdevs_operational": 4, 00:14:46.718 "base_bdevs_list": [ 00:14:46.718 { 00:14:46.718 "name": "BaseBdev1", 00:14:46.718 "uuid": "5a2b3e93-edc3-57e8-84d9-3ef925379369", 00:14:46.718 "is_configured": true, 00:14:46.718 "data_offset": 2048, 00:14:46.718 "data_size": 63488 00:14:46.718 }, 00:14:46.718 { 00:14:46.719 "name": "BaseBdev2", 00:14:46.719 "uuid": "a01175e2-67a6-50b7-9c68-3352f0a133cc", 00:14:46.719 "is_configured": true, 00:14:46.719 "data_offset": 2048, 00:14:46.719 "data_size": 63488 00:14:46.719 }, 00:14:46.719 { 00:14:46.719 "name": "BaseBdev3", 00:14:46.719 "uuid": "07861d09-722a-5996-8061-e159b0296d93", 00:14:46.719 "is_configured": true, 00:14:46.719 "data_offset": 2048, 00:14:46.719 "data_size": 63488 00:14:46.719 }, 00:14:46.719 { 00:14:46.719 "name": "BaseBdev4", 00:14:46.719 "uuid": "59b36e7e-3807-5451-974d-065fa2682d05", 00:14:46.719 "is_configured": true, 00:14:46.719 "data_offset": 2048, 00:14:46.719 "data_size": 63488 00:14:46.719 } 00:14:46.719 ] 00:14:46.719 }' 00:14:46.719 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.719 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.288 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:47.288 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.288 20:27:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.289 [2024-11-26 20:27:40.691795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.289 [2024-11-26 20:27:40.691838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.289 [2024-11-26 20:27:40.695238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.289 [2024-11-26 20:27:40.695367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.289 [2024-11-26 20:27:40.695455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.289 [2024-11-26 20:27:40.695517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:47.289 { 00:14:47.289 "results": [ 00:14:47.289 { 00:14:47.289 "job": "raid_bdev1", 00:14:47.289 "core_mask": "0x1", 00:14:47.289 "workload": "randrw", 00:14:47.289 "percentage": 50, 00:14:47.289 "status": "finished", 00:14:47.289 "queue_depth": 1, 00:14:47.289 "io_size": 131072, 00:14:47.289 "runtime": 1.393954, 00:14:47.289 "iops": 13049.928476836394, 00:14:47.289 "mibps": 1631.2410596045493, 00:14:47.289 "io_failed": 1, 00:14:47.289 "io_timeout": 0, 00:14:47.289 "avg_latency_us": 106.45178724368502, 00:14:47.289 "min_latency_us": 28.618340611353712, 00:14:47.289 "max_latency_us": 1602.6270742358079 00:14:47.289 } 00:14:47.289 ], 00:14:47.289 "core_count": 1 00:14:47.289 } 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73375 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73375 ']' 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73375 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73375 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73375' 00:14:47.289 killing process with pid 73375 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73375 00:14:47.289 [2024-11-26 20:27:40.740620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.289 20:27:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73375 00:14:47.548 [2024-11-26 20:27:41.098880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.J5rF5n93ly 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:48.928 00:14:48.928 real 0m4.953s 00:14:48.928 user 0m5.822s 
00:14:48.928 sys 0m0.619s 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.928 20:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.928 ************************************ 00:14:48.928 END TEST raid_write_error_test 00:14:48.928 ************************************ 00:14:49.188 20:27:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:49.188 20:27:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:49.188 20:27:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:49.188 20:27:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.188 20:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.188 ************************************ 00:14:49.188 START TEST raid_state_function_test 00:14:49.188 ************************************ 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.188 
20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:49.188 20:27:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73523 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73523' 00:14:49.188 Process raid pid: 73523 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73523 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73523 ']' 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.188 20:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.188 [2024-11-26 20:27:42.604511] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:14:49.189 [2024-11-26 20:27:42.604661] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.448 [2024-11-26 20:27:42.773073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.448 [2024-11-26 20:27:42.898531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.706 [2024-11-26 20:27:43.126122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.706 [2024-11-26 20:27:43.126272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.965 [2024-11-26 20:27:43.490095] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.965 [2024-11-26 20:27:43.490157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.965 [2024-11-26 20:27:43.490169] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.965 [2024-11-26 20:27:43.490180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.965 [2024-11-26 20:27:43.490187] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:49.965 [2024-11-26 20:27:43.490197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.965 [2024-11-26 20:27:43.490210] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:49.965 [2024-11-26 20:27:43.490220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.965 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.224 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.224 "name": "Existed_Raid", 00:14:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.224 "strip_size_kb": 0, 00:14:50.224 "state": "configuring", 00:14:50.224 "raid_level": "raid1", 00:14:50.224 "superblock": false, 00:14:50.224 "num_base_bdevs": 4, 00:14:50.224 "num_base_bdevs_discovered": 0, 00:14:50.224 "num_base_bdevs_operational": 4, 00:14:50.224 "base_bdevs_list": [ 00:14:50.224 { 00:14:50.224 "name": "BaseBdev1", 00:14:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.224 "is_configured": false, 00:14:50.224 "data_offset": 0, 00:14:50.224 "data_size": 0 00:14:50.224 }, 00:14:50.224 { 00:14:50.224 "name": "BaseBdev2", 00:14:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.224 "is_configured": false, 00:14:50.224 "data_offset": 0, 00:14:50.224 "data_size": 0 00:14:50.224 }, 00:14:50.224 { 00:14:50.224 "name": "BaseBdev3", 00:14:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.224 "is_configured": false, 00:14:50.224 "data_offset": 0, 00:14:50.224 "data_size": 0 00:14:50.224 }, 00:14:50.224 { 00:14:50.224 "name": "BaseBdev4", 00:14:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.224 "is_configured": false, 00:14:50.224 "data_offset": 0, 00:14:50.224 "data_size": 0 00:14:50.224 } 00:14:50.224 ] 00:14:50.224 }' 00:14:50.224 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.224 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.489 [2024-11-26 20:27:43.981250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.489 [2024-11-26 20:27:43.981381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.489 [2024-11-26 20:27:43.993231] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.489 [2024-11-26 20:27:43.993343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.489 [2024-11-26 20:27:43.993385] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.489 [2024-11-26 20:27:43.993436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.489 [2024-11-26 20:27:43.993479] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.489 [2024-11-26 20:27:43.993515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.489 [2024-11-26 20:27:43.993555] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:50.489 [2024-11-26 20:27:43.993606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.489 20:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.754 [2024-11-26 20:27:44.047866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.754 BaseBdev1 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.754 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.754 [ 00:14:50.754 { 00:14:50.754 "name": "BaseBdev1", 00:14:50.754 "aliases": [ 00:14:50.755 "9559f350-30e8-4f4b-ac5b-90436782f956" 00:14:50.755 ], 00:14:50.755 "product_name": "Malloc disk", 00:14:50.755 "block_size": 512, 00:14:50.755 "num_blocks": 65536, 00:14:50.755 "uuid": "9559f350-30e8-4f4b-ac5b-90436782f956", 00:14:50.755 "assigned_rate_limits": { 00:14:50.755 "rw_ios_per_sec": 0, 00:14:50.755 "rw_mbytes_per_sec": 0, 00:14:50.755 "r_mbytes_per_sec": 0, 00:14:50.755 "w_mbytes_per_sec": 0 00:14:50.755 }, 00:14:50.755 "claimed": true, 00:14:50.755 "claim_type": "exclusive_write", 00:14:50.755 "zoned": false, 00:14:50.755 "supported_io_types": { 00:14:50.755 "read": true, 00:14:50.755 "write": true, 00:14:50.755 "unmap": true, 00:14:50.755 "flush": true, 00:14:50.755 "reset": true, 00:14:50.755 "nvme_admin": false, 00:14:50.755 "nvme_io": false, 00:14:50.755 "nvme_io_md": false, 00:14:50.755 "write_zeroes": true, 00:14:50.755 "zcopy": true, 00:14:50.755 "get_zone_info": false, 00:14:50.755 "zone_management": false, 00:14:50.755 "zone_append": false, 00:14:50.755 "compare": false, 00:14:50.755 "compare_and_write": false, 00:14:50.755 "abort": true, 00:14:50.755 "seek_hole": false, 00:14:50.755 "seek_data": false, 00:14:50.755 "copy": true, 00:14:50.755 "nvme_iov_md": false 00:14:50.755 }, 00:14:50.755 "memory_domains": [ 00:14:50.755 { 00:14:50.755 "dma_device_id": "system", 00:14:50.755 "dma_device_type": 1 00:14:50.755 }, 00:14:50.755 { 00:14:50.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.755 "dma_device_type": 2 00:14:50.755 } 00:14:50.755 ], 00:14:50.755 "driver_specific": {} 00:14:50.755 } 00:14:50.755 ] 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.755 "name": "Existed_Raid", 00:14:50.755 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:50.755 "strip_size_kb": 0, 00:14:50.755 "state": "configuring", 00:14:50.755 "raid_level": "raid1", 00:14:50.755 "superblock": false, 00:14:50.755 "num_base_bdevs": 4, 00:14:50.755 "num_base_bdevs_discovered": 1, 00:14:50.755 "num_base_bdevs_operational": 4, 00:14:50.755 "base_bdevs_list": [ 00:14:50.755 { 00:14:50.755 "name": "BaseBdev1", 00:14:50.755 "uuid": "9559f350-30e8-4f4b-ac5b-90436782f956", 00:14:50.755 "is_configured": true, 00:14:50.755 "data_offset": 0, 00:14:50.755 "data_size": 65536 00:14:50.755 }, 00:14:50.755 { 00:14:50.755 "name": "BaseBdev2", 00:14:50.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.755 "is_configured": false, 00:14:50.755 "data_offset": 0, 00:14:50.755 "data_size": 0 00:14:50.755 }, 00:14:50.755 { 00:14:50.755 "name": "BaseBdev3", 00:14:50.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.755 "is_configured": false, 00:14:50.755 "data_offset": 0, 00:14:50.755 "data_size": 0 00:14:50.755 }, 00:14:50.755 { 00:14:50.755 "name": "BaseBdev4", 00:14:50.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.755 "is_configured": false, 00:14:50.755 "data_offset": 0, 00:14:50.755 "data_size": 0 00:14:50.755 } 00:14:50.755 ] 00:14:50.755 }' 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.755 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 [2024-11-26 20:27:44.543099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.014 [2024-11-26 20:27:44.543219] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 [2024-11-26 20:27:44.555185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.014 [2024-11-26 20:27:44.557433] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.014 [2024-11-26 20:27:44.557481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.014 [2024-11-26 20:27:44.557493] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.014 [2024-11-26 20:27:44.557506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.014 [2024-11-26 20:27:44.557514] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.014 [2024-11-26 20:27:44.557524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:51.014 20:27:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.014 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.274 "name": "Existed_Raid", 00:14:51.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.274 "strip_size_kb": 0, 00:14:51.274 "state": "configuring", 00:14:51.274 "raid_level": "raid1", 00:14:51.274 "superblock": false, 00:14:51.274 "num_base_bdevs": 4, 00:14:51.274 "num_base_bdevs_discovered": 1, 00:14:51.274 
"num_base_bdevs_operational": 4, 00:14:51.274 "base_bdevs_list": [ 00:14:51.274 { 00:14:51.274 "name": "BaseBdev1", 00:14:51.274 "uuid": "9559f350-30e8-4f4b-ac5b-90436782f956", 00:14:51.274 "is_configured": true, 00:14:51.274 "data_offset": 0, 00:14:51.274 "data_size": 65536 00:14:51.274 }, 00:14:51.274 { 00:14:51.274 "name": "BaseBdev2", 00:14:51.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.274 "is_configured": false, 00:14:51.274 "data_offset": 0, 00:14:51.274 "data_size": 0 00:14:51.274 }, 00:14:51.274 { 00:14:51.274 "name": "BaseBdev3", 00:14:51.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.274 "is_configured": false, 00:14:51.274 "data_offset": 0, 00:14:51.274 "data_size": 0 00:14:51.274 }, 00:14:51.274 { 00:14:51.274 "name": "BaseBdev4", 00:14:51.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.274 "is_configured": false, 00:14:51.274 "data_offset": 0, 00:14:51.274 "data_size": 0 00:14:51.274 } 00:14:51.274 ] 00:14:51.274 }' 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.274 20:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.534 [2024-11-26 20:27:45.075217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.534 BaseBdev2 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.534 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.795 [ 00:14:51.795 { 00:14:51.795 "name": "BaseBdev2", 00:14:51.795 "aliases": [ 00:14:51.795 "7cb361db-f454-4475-a439-bc85dcc65037" 00:14:51.795 ], 00:14:51.795 "product_name": "Malloc disk", 00:14:51.795 "block_size": 512, 00:14:51.795 "num_blocks": 65536, 00:14:51.795 "uuid": "7cb361db-f454-4475-a439-bc85dcc65037", 00:14:51.795 "assigned_rate_limits": { 00:14:51.795 "rw_ios_per_sec": 0, 00:14:51.795 "rw_mbytes_per_sec": 0, 00:14:51.795 "r_mbytes_per_sec": 0, 00:14:51.795 "w_mbytes_per_sec": 0 00:14:51.795 }, 00:14:51.795 "claimed": true, 00:14:51.795 "claim_type": "exclusive_write", 00:14:51.795 "zoned": false, 00:14:51.795 "supported_io_types": { 00:14:51.795 "read": true, 00:14:51.795 "write": true, 00:14:51.795 
"unmap": true, 00:14:51.795 "flush": true, 00:14:51.795 "reset": true, 00:14:51.795 "nvme_admin": false, 00:14:51.795 "nvme_io": false, 00:14:51.795 "nvme_io_md": false, 00:14:51.795 "write_zeroes": true, 00:14:51.795 "zcopy": true, 00:14:51.795 "get_zone_info": false, 00:14:51.795 "zone_management": false, 00:14:51.795 "zone_append": false, 00:14:51.795 "compare": false, 00:14:51.795 "compare_and_write": false, 00:14:51.795 "abort": true, 00:14:51.795 "seek_hole": false, 00:14:51.795 "seek_data": false, 00:14:51.795 "copy": true, 00:14:51.795 "nvme_iov_md": false 00:14:51.795 }, 00:14:51.795 "memory_domains": [ 00:14:51.795 { 00:14:51.795 "dma_device_id": "system", 00:14:51.795 "dma_device_type": 1 00:14:51.795 }, 00:14:51.795 { 00:14:51.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.795 "dma_device_type": 2 00:14:51.795 } 00:14:51.795 ], 00:14:51.795 "driver_specific": {} 00:14:51.795 } 00:14:51.795 ] 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.795 20:27:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.795 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.796 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.796 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.796 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.796 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.796 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.796 "name": "Existed_Raid", 00:14:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.796 "strip_size_kb": 0, 00:14:51.796 "state": "configuring", 00:14:51.796 "raid_level": "raid1", 00:14:51.796 "superblock": false, 00:14:51.796 "num_base_bdevs": 4, 00:14:51.796 "num_base_bdevs_discovered": 2, 00:14:51.796 "num_base_bdevs_operational": 4, 00:14:51.796 "base_bdevs_list": [ 00:14:51.796 { 00:14:51.796 "name": "BaseBdev1", 00:14:51.796 "uuid": "9559f350-30e8-4f4b-ac5b-90436782f956", 00:14:51.796 "is_configured": true, 00:14:51.796 "data_offset": 0, 00:14:51.796 "data_size": 65536 00:14:51.796 }, 00:14:51.796 { 00:14:51.796 "name": "BaseBdev2", 00:14:51.796 "uuid": "7cb361db-f454-4475-a439-bc85dcc65037", 00:14:51.796 "is_configured": true, 00:14:51.796 
"data_offset": 0, 00:14:51.796 "data_size": 65536 00:14:51.796 }, 00:14:51.796 { 00:14:51.796 "name": "BaseBdev3", 00:14:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.796 "is_configured": false, 00:14:51.796 "data_offset": 0, 00:14:51.796 "data_size": 0 00:14:51.796 }, 00:14:51.796 { 00:14:51.796 "name": "BaseBdev4", 00:14:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.796 "is_configured": false, 00:14:51.796 "data_offset": 0, 00:14:51.796 "data_size": 0 00:14:51.796 } 00:14:51.796 ] 00:14:51.796 }' 00:14:51.796 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.796 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.055 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.055 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.055 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.314 [2024-11-26 20:27:45.634302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.314 BaseBdev3 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.314 [ 00:14:52.314 { 00:14:52.314 "name": "BaseBdev3", 00:14:52.314 "aliases": [ 00:14:52.314 "43f27da1-c2fd-4a2e-a8e1-86aee32080fa" 00:14:52.314 ], 00:14:52.314 "product_name": "Malloc disk", 00:14:52.314 "block_size": 512, 00:14:52.314 "num_blocks": 65536, 00:14:52.314 "uuid": "43f27da1-c2fd-4a2e-a8e1-86aee32080fa", 00:14:52.314 "assigned_rate_limits": { 00:14:52.314 "rw_ios_per_sec": 0, 00:14:52.314 "rw_mbytes_per_sec": 0, 00:14:52.314 "r_mbytes_per_sec": 0, 00:14:52.314 "w_mbytes_per_sec": 0 00:14:52.314 }, 00:14:52.314 "claimed": true, 00:14:52.314 "claim_type": "exclusive_write", 00:14:52.314 "zoned": false, 00:14:52.314 "supported_io_types": { 00:14:52.314 "read": true, 00:14:52.314 "write": true, 00:14:52.314 "unmap": true, 00:14:52.314 "flush": true, 00:14:52.314 "reset": true, 00:14:52.314 "nvme_admin": false, 00:14:52.314 "nvme_io": false, 00:14:52.314 "nvme_io_md": false, 00:14:52.314 "write_zeroes": true, 00:14:52.314 "zcopy": true, 00:14:52.314 "get_zone_info": false, 00:14:52.314 "zone_management": false, 00:14:52.314 "zone_append": false, 00:14:52.314 "compare": false, 00:14:52.314 "compare_and_write": false, 00:14:52.314 "abort": true, 
00:14:52.314 "seek_hole": false, 00:14:52.314 "seek_data": false, 00:14:52.314 "copy": true, 00:14:52.314 "nvme_iov_md": false 00:14:52.314 }, 00:14:52.314 "memory_domains": [ 00:14:52.314 { 00:14:52.314 "dma_device_id": "system", 00:14:52.314 "dma_device_type": 1 00:14:52.314 }, 00:14:52.314 { 00:14:52.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.314 "dma_device_type": 2 00:14:52.314 } 00:14:52.314 ], 00:14:52.314 "driver_specific": {} 00:14:52.314 } 00:14:52.314 ] 00:14:52.314 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.315 20:27:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.315 "name": "Existed_Raid", 00:14:52.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.315 "strip_size_kb": 0, 00:14:52.315 "state": "configuring", 00:14:52.315 "raid_level": "raid1", 00:14:52.315 "superblock": false, 00:14:52.315 "num_base_bdevs": 4, 00:14:52.315 "num_base_bdevs_discovered": 3, 00:14:52.315 "num_base_bdevs_operational": 4, 00:14:52.315 "base_bdevs_list": [ 00:14:52.315 { 00:14:52.315 "name": "BaseBdev1", 00:14:52.315 "uuid": "9559f350-30e8-4f4b-ac5b-90436782f956", 00:14:52.315 "is_configured": true, 00:14:52.315 "data_offset": 0, 00:14:52.315 "data_size": 65536 00:14:52.315 }, 00:14:52.315 { 00:14:52.315 "name": "BaseBdev2", 00:14:52.315 "uuid": "7cb361db-f454-4475-a439-bc85dcc65037", 00:14:52.315 "is_configured": true, 00:14:52.315 "data_offset": 0, 00:14:52.315 "data_size": 65536 00:14:52.315 }, 00:14:52.315 { 00:14:52.315 "name": "BaseBdev3", 00:14:52.315 "uuid": "43f27da1-c2fd-4a2e-a8e1-86aee32080fa", 00:14:52.315 "is_configured": true, 00:14:52.315 "data_offset": 0, 00:14:52.315 "data_size": 65536 00:14:52.315 }, 00:14:52.315 { 00:14:52.315 "name": "BaseBdev4", 00:14:52.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.315 "is_configured": false, 00:14:52.315 "data_offset": 
0, 00:14:52.315 "data_size": 0 00:14:52.315 } 00:14:52.315 ] 00:14:52.315 }' 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.315 20:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.574 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:52.574 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.574 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.834 [2024-11-26 20:27:46.168800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.834 [2024-11-26 20:27:46.168968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:52.834 [2024-11-26 20:27:46.169000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:52.834 [2024-11-26 20:27:46.169427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:52.834 [2024-11-26 20:27:46.169693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:52.834 [2024-11-26 20:27:46.169749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:52.834 [2024-11-26 20:27:46.170086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.834 BaseBdev4 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.834 [ 00:14:52.834 { 00:14:52.834 "name": "BaseBdev4", 00:14:52.834 "aliases": [ 00:14:52.834 "3a99130c-1a50-4e46-bf0c-2015a449a5c0" 00:14:52.834 ], 00:14:52.834 "product_name": "Malloc disk", 00:14:52.834 "block_size": 512, 00:14:52.834 "num_blocks": 65536, 00:14:52.834 "uuid": "3a99130c-1a50-4e46-bf0c-2015a449a5c0", 00:14:52.834 "assigned_rate_limits": { 00:14:52.834 "rw_ios_per_sec": 0, 00:14:52.834 "rw_mbytes_per_sec": 0, 00:14:52.834 "r_mbytes_per_sec": 0, 00:14:52.834 "w_mbytes_per_sec": 0 00:14:52.834 }, 00:14:52.834 "claimed": true, 00:14:52.834 "claim_type": "exclusive_write", 00:14:52.834 "zoned": false, 00:14:52.834 "supported_io_types": { 00:14:52.834 "read": true, 00:14:52.834 "write": true, 00:14:52.834 "unmap": true, 00:14:52.834 "flush": true, 00:14:52.834 "reset": true, 00:14:52.834 "nvme_admin": false, 00:14:52.834 "nvme_io": 
false, 00:14:52.834 "nvme_io_md": false, 00:14:52.834 "write_zeroes": true, 00:14:52.834 "zcopy": true, 00:14:52.834 "get_zone_info": false, 00:14:52.834 "zone_management": false, 00:14:52.834 "zone_append": false, 00:14:52.834 "compare": false, 00:14:52.834 "compare_and_write": false, 00:14:52.834 "abort": true, 00:14:52.834 "seek_hole": false, 00:14:52.834 "seek_data": false, 00:14:52.834 "copy": true, 00:14:52.834 "nvme_iov_md": false 00:14:52.834 }, 00:14:52.834 "memory_domains": [ 00:14:52.834 { 00:14:52.834 "dma_device_id": "system", 00:14:52.834 "dma_device_type": 1 00:14:52.834 }, 00:14:52.834 { 00:14:52.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.834 "dma_device_type": 2 00:14:52.834 } 00:14:52.834 ], 00:14:52.834 "driver_specific": {} 00:14:52.834 } 00:14:52.834 ] 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.834 "name": "Existed_Raid", 00:14:52.834 "uuid": "4532e571-7a3d-4096-b94b-46555f018a15", 00:14:52.834 "strip_size_kb": 0, 00:14:52.834 "state": "online", 00:14:52.834 "raid_level": "raid1", 00:14:52.834 "superblock": false, 00:14:52.834 "num_base_bdevs": 4, 00:14:52.834 "num_base_bdevs_discovered": 4, 00:14:52.834 "num_base_bdevs_operational": 4, 00:14:52.834 "base_bdevs_list": [ 00:14:52.834 { 00:14:52.834 "name": "BaseBdev1", 00:14:52.834 "uuid": "9559f350-30e8-4f4b-ac5b-90436782f956", 00:14:52.834 "is_configured": true, 00:14:52.834 "data_offset": 0, 00:14:52.834 "data_size": 65536 00:14:52.834 }, 00:14:52.834 { 00:14:52.834 "name": "BaseBdev2", 00:14:52.834 "uuid": "7cb361db-f454-4475-a439-bc85dcc65037", 00:14:52.834 "is_configured": true, 00:14:52.834 "data_offset": 0, 00:14:52.834 "data_size": 65536 00:14:52.834 }, 00:14:52.834 { 00:14:52.834 "name": "BaseBdev3", 00:14:52.834 "uuid": "43f27da1-c2fd-4a2e-a8e1-86aee32080fa", 
00:14:52.834 "is_configured": true, 00:14:52.834 "data_offset": 0, 00:14:52.834 "data_size": 65536 00:14:52.834 }, 00:14:52.834 { 00:14:52.834 "name": "BaseBdev4", 00:14:52.834 "uuid": "3a99130c-1a50-4e46-bf0c-2015a449a5c0", 00:14:52.834 "is_configured": true, 00:14:52.834 "data_offset": 0, 00:14:52.834 "data_size": 65536 00:14:52.834 } 00:14:52.834 ] 00:14:52.834 }' 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.834 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.402 [2024-11-26 20:27:46.736450] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.402 "name": "Existed_Raid", 00:14:53.402 "aliases": [ 00:14:53.402 "4532e571-7a3d-4096-b94b-46555f018a15" 00:14:53.402 ], 00:14:53.402 "product_name": "Raid Volume", 00:14:53.402 "block_size": 512, 00:14:53.402 "num_blocks": 65536, 00:14:53.402 "uuid": "4532e571-7a3d-4096-b94b-46555f018a15", 00:14:53.402 "assigned_rate_limits": { 00:14:53.402 "rw_ios_per_sec": 0, 00:14:53.402 "rw_mbytes_per_sec": 0, 00:14:53.402 "r_mbytes_per_sec": 0, 00:14:53.402 "w_mbytes_per_sec": 0 00:14:53.402 }, 00:14:53.402 "claimed": false, 00:14:53.402 "zoned": false, 00:14:53.402 "supported_io_types": { 00:14:53.402 "read": true, 00:14:53.402 "write": true, 00:14:53.402 "unmap": false, 00:14:53.402 "flush": false, 00:14:53.402 "reset": true, 00:14:53.402 "nvme_admin": false, 00:14:53.402 "nvme_io": false, 00:14:53.402 "nvme_io_md": false, 00:14:53.402 "write_zeroes": true, 00:14:53.402 "zcopy": false, 00:14:53.402 "get_zone_info": false, 00:14:53.402 "zone_management": false, 00:14:53.402 "zone_append": false, 00:14:53.402 "compare": false, 00:14:53.402 "compare_and_write": false, 00:14:53.402 "abort": false, 00:14:53.402 "seek_hole": false, 00:14:53.402 "seek_data": false, 00:14:53.402 "copy": false, 00:14:53.402 "nvme_iov_md": false 00:14:53.402 }, 00:14:53.402 "memory_domains": [ 00:14:53.402 { 00:14:53.402 "dma_device_id": "system", 00:14:53.402 "dma_device_type": 1 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.402 "dma_device_type": 2 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "dma_device_id": "system", 00:14:53.402 "dma_device_type": 1 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.402 "dma_device_type": 2 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "dma_device_id": "system", 00:14:53.402 "dma_device_type": 1 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.402 "dma_device_type": 2 
00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "dma_device_id": "system", 00:14:53.402 "dma_device_type": 1 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.402 "dma_device_type": 2 00:14:53.402 } 00:14:53.402 ], 00:14:53.402 "driver_specific": { 00:14:53.402 "raid": { 00:14:53.402 "uuid": "4532e571-7a3d-4096-b94b-46555f018a15", 00:14:53.402 "strip_size_kb": 0, 00:14:53.402 "state": "online", 00:14:53.402 "raid_level": "raid1", 00:14:53.402 "superblock": false, 00:14:53.402 "num_base_bdevs": 4, 00:14:53.402 "num_base_bdevs_discovered": 4, 00:14:53.402 "num_base_bdevs_operational": 4, 00:14:53.402 "base_bdevs_list": [ 00:14:53.402 { 00:14:53.402 "name": "BaseBdev1", 00:14:53.402 "uuid": "9559f350-30e8-4f4b-ac5b-90436782f956", 00:14:53.402 "is_configured": true, 00:14:53.402 "data_offset": 0, 00:14:53.402 "data_size": 65536 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "name": "BaseBdev2", 00:14:53.402 "uuid": "7cb361db-f454-4475-a439-bc85dcc65037", 00:14:53.402 "is_configured": true, 00:14:53.402 "data_offset": 0, 00:14:53.402 "data_size": 65536 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "name": "BaseBdev3", 00:14:53.402 "uuid": "43f27da1-c2fd-4a2e-a8e1-86aee32080fa", 00:14:53.402 "is_configured": true, 00:14:53.402 "data_offset": 0, 00:14:53.402 "data_size": 65536 00:14:53.402 }, 00:14:53.402 { 00:14:53.402 "name": "BaseBdev4", 00:14:53.402 "uuid": "3a99130c-1a50-4e46-bf0c-2015a449a5c0", 00:14:53.402 "is_configured": true, 00:14:53.402 "data_offset": 0, 00:14:53.402 "data_size": 65536 00:14:53.402 } 00:14:53.402 ] 00:14:53.402 } 00:14:53.402 } 00:14:53.402 }' 00:14:53.402 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:53.403 BaseBdev2 00:14:53.403 BaseBdev3 00:14:53.403 BaseBdev4' 00:14:53.403 
20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.403 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 20:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 [2024-11-26 20:27:47.067549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.934 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.934 "name": "Existed_Raid", 00:14:53.934 "uuid": "4532e571-7a3d-4096-b94b-46555f018a15", 00:14:53.934 "strip_size_kb": 0, 00:14:53.934 "state": "online", 00:14:53.934 "raid_level": "raid1", 00:14:53.934 "superblock": false, 00:14:53.934 "num_base_bdevs": 4, 00:14:53.934 "num_base_bdevs_discovered": 3, 00:14:53.934 "num_base_bdevs_operational": 3, 00:14:53.934 "base_bdevs_list": [ 00:14:53.934 { 00:14:53.934 "name": null, 00:14:53.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.934 "is_configured": false, 00:14:53.934 "data_offset": 0, 00:14:53.934 "data_size": 65536 00:14:53.934 }, 00:14:53.934 { 00:14:53.934 "name": "BaseBdev2", 00:14:53.934 "uuid": "7cb361db-f454-4475-a439-bc85dcc65037", 00:14:53.934 "is_configured": true, 00:14:53.934 "data_offset": 0, 00:14:53.934 "data_size": 65536 00:14:53.934 }, 00:14:53.934 { 00:14:53.934 "name": "BaseBdev3", 00:14:53.934 "uuid": "43f27da1-c2fd-4a2e-a8e1-86aee32080fa", 00:14:53.934 "is_configured": true, 00:14:53.934 "data_offset": 0, 00:14:53.934 "data_size": 65536 00:14:53.934 }, 00:14:53.934 { 
00:14:53.934 "name": "BaseBdev4", 00:14:53.934 "uuid": "3a99130c-1a50-4e46-bf0c-2015a449a5c0", 00:14:53.934 "is_configured": true, 00:14:53.934 "data_offset": 0, 00:14:53.934 "data_size": 65536 00:14:53.934 } 00:14:53.934 ] 00:14:53.934 }' 00:14:53.934 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.934 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.193 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.193 [2024-11-26 20:27:47.678917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.461 
20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.461 [2024-11-26 20:27:47.842644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.461 20:27:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.461 20:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.461 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.461 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.461 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:54.461 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.461 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.461 [2024-11-26 20:27:48.011439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:54.461 [2024-11-26 20:27:48.011631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.727 [2024-11-26 20:27:48.118421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.727 [2024-11-26 20:27:48.118568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.727 [2024-11-26 20:27:48.118620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:54.727 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.727 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.727 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.727 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.727 20:27:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.727 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.728 BaseBdev2 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.728 20:27:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.728 [ 00:14:54.728 { 00:14:54.728 "name": "BaseBdev2", 00:14:54.728 "aliases": [ 00:14:54.728 "6768d65c-0419-42c8-83f0-0e403fc1a560" 00:14:54.728 ], 00:14:54.728 "product_name": "Malloc disk", 00:14:54.728 "block_size": 512, 00:14:54.728 "num_blocks": 65536, 00:14:54.728 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:54.728 "assigned_rate_limits": { 00:14:54.728 "rw_ios_per_sec": 0, 00:14:54.728 "rw_mbytes_per_sec": 0, 00:14:54.728 "r_mbytes_per_sec": 0, 00:14:54.728 "w_mbytes_per_sec": 0 00:14:54.728 }, 00:14:54.728 "claimed": false, 00:14:54.728 "zoned": false, 00:14:54.728 "supported_io_types": { 00:14:54.728 "read": true, 00:14:54.728 "write": true, 00:14:54.728 "unmap": true, 00:14:54.728 "flush": true, 00:14:54.728 "reset": true, 00:14:54.728 "nvme_admin": false, 00:14:54.728 "nvme_io": false, 00:14:54.728 "nvme_io_md": false, 00:14:54.728 "write_zeroes": true, 00:14:54.728 "zcopy": true, 00:14:54.728 "get_zone_info": false, 00:14:54.728 "zone_management": false, 00:14:54.728 "zone_append": false, 00:14:54.728 "compare": false, 00:14:54.728 "compare_and_write": false, 
00:14:54.728 "abort": true, 00:14:54.728 "seek_hole": false, 00:14:54.728 "seek_data": false, 00:14:54.728 "copy": true, 00:14:54.728 "nvme_iov_md": false 00:14:54.728 }, 00:14:54.728 "memory_domains": [ 00:14:54.728 { 00:14:54.728 "dma_device_id": "system", 00:14:54.728 "dma_device_type": 1 00:14:54.728 }, 00:14:54.728 { 00:14:54.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.728 "dma_device_type": 2 00:14:54.728 } 00:14:54.728 ], 00:14:54.728 "driver_specific": {} 00:14:54.728 } 00:14:54.728 ] 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.728 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 BaseBdev3 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.987 20:27:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 [ 00:14:54.987 { 00:14:54.987 "name": "BaseBdev3", 00:14:54.987 "aliases": [ 00:14:54.987 "99f336bd-7cde-4582-8360-4babfae5c282" 00:14:54.987 ], 00:14:54.987 "product_name": "Malloc disk", 00:14:54.987 "block_size": 512, 00:14:54.987 "num_blocks": 65536, 00:14:54.987 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:54.987 "assigned_rate_limits": { 00:14:54.987 "rw_ios_per_sec": 0, 00:14:54.987 "rw_mbytes_per_sec": 0, 00:14:54.987 "r_mbytes_per_sec": 0, 00:14:54.987 "w_mbytes_per_sec": 0 00:14:54.987 }, 00:14:54.987 "claimed": false, 00:14:54.987 "zoned": false, 00:14:54.987 "supported_io_types": { 00:14:54.987 "read": true, 00:14:54.987 "write": true, 00:14:54.987 "unmap": true, 00:14:54.987 "flush": true, 00:14:54.987 "reset": true, 00:14:54.987 "nvme_admin": false, 00:14:54.987 "nvme_io": false, 00:14:54.987 "nvme_io_md": false, 00:14:54.987 "write_zeroes": true, 00:14:54.987 "zcopy": true, 00:14:54.987 "get_zone_info": false, 00:14:54.987 "zone_management": false, 00:14:54.987 "zone_append": false, 00:14:54.987 "compare": false, 00:14:54.987 "compare_and_write": false, 
00:14:54.987 "abort": true, 00:14:54.987 "seek_hole": false, 00:14:54.987 "seek_data": false, 00:14:54.987 "copy": true, 00:14:54.987 "nvme_iov_md": false 00:14:54.987 }, 00:14:54.987 "memory_domains": [ 00:14:54.987 { 00:14:54.987 "dma_device_id": "system", 00:14:54.987 "dma_device_type": 1 00:14:54.987 }, 00:14:54.987 { 00:14:54.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.987 "dma_device_type": 2 00:14:54.987 } 00:14:54.987 ], 00:14:54.987 "driver_specific": {} 00:14:54.987 } 00:14:54.987 ] 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 BaseBdev4 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.987 20:27:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.987 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.987 [ 00:14:54.987 { 00:14:54.987 "name": "BaseBdev4", 00:14:54.987 "aliases": [ 00:14:54.987 "9186921d-9248-4f7d-886f-b18499df34ba" 00:14:54.987 ], 00:14:54.987 "product_name": "Malloc disk", 00:14:54.987 "block_size": 512, 00:14:54.987 "num_blocks": 65536, 00:14:54.987 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:54.987 "assigned_rate_limits": { 00:14:54.987 "rw_ios_per_sec": 0, 00:14:54.987 "rw_mbytes_per_sec": 0, 00:14:54.987 "r_mbytes_per_sec": 0, 00:14:54.987 "w_mbytes_per_sec": 0 00:14:54.987 }, 00:14:54.988 "claimed": false, 00:14:54.988 "zoned": false, 00:14:54.988 "supported_io_types": { 00:14:54.988 "read": true, 00:14:54.988 "write": true, 00:14:54.988 "unmap": true, 00:14:54.988 "flush": true, 00:14:54.988 "reset": true, 00:14:54.988 "nvme_admin": false, 00:14:54.988 "nvme_io": false, 00:14:54.988 "nvme_io_md": false, 00:14:54.988 "write_zeroes": true, 00:14:54.988 "zcopy": true, 00:14:54.988 "get_zone_info": false, 00:14:54.988 "zone_management": false, 00:14:54.988 "zone_append": false, 00:14:54.988 "compare": false, 00:14:54.988 "compare_and_write": false, 
00:14:54.988 "abort": true, 00:14:54.988 "seek_hole": false, 00:14:54.988 "seek_data": false, 00:14:54.988 "copy": true, 00:14:54.988 "nvme_iov_md": false 00:14:54.988 }, 00:14:54.988 "memory_domains": [ 00:14:54.988 { 00:14:54.988 "dma_device_id": "system", 00:14:54.988 "dma_device_type": 1 00:14:54.988 }, 00:14:54.988 { 00:14:54.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.988 "dma_device_type": 2 00:14:54.988 } 00:14:54.988 ], 00:14:54.988 "driver_specific": {} 00:14:54.988 } 00:14:54.988 ] 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.988 [2024-11-26 20:27:48.428037] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.988 [2024-11-26 20:27:48.428153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.988 [2024-11-26 20:27:48.428213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.988 [2024-11-26 20:27:48.430238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.988 [2024-11-26 20:27:48.430366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:54.988 20:27:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.988 "name": "Existed_Raid", 00:14:54.988 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:54.988 "strip_size_kb": 0, 00:14:54.988 "state": "configuring", 00:14:54.988 "raid_level": "raid1", 00:14:54.988 "superblock": false, 00:14:54.988 "num_base_bdevs": 4, 00:14:54.988 "num_base_bdevs_discovered": 3, 00:14:54.988 "num_base_bdevs_operational": 4, 00:14:54.988 "base_bdevs_list": [ 00:14:54.988 { 00:14:54.988 "name": "BaseBdev1", 00:14:54.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.988 "is_configured": false, 00:14:54.988 "data_offset": 0, 00:14:54.988 "data_size": 0 00:14:54.988 }, 00:14:54.988 { 00:14:54.988 "name": "BaseBdev2", 00:14:54.988 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:54.988 "is_configured": true, 00:14:54.988 "data_offset": 0, 00:14:54.988 "data_size": 65536 00:14:54.988 }, 00:14:54.988 { 00:14:54.988 "name": "BaseBdev3", 00:14:54.988 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:54.988 "is_configured": true, 00:14:54.988 "data_offset": 0, 00:14:54.988 "data_size": 65536 00:14:54.988 }, 00:14:54.988 { 00:14:54.988 "name": "BaseBdev4", 00:14:54.988 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:54.988 "is_configured": true, 00:14:54.988 "data_offset": 0, 00:14:54.988 "data_size": 65536 00:14:54.988 } 00:14:54.988 ] 00:14:54.988 }' 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.988 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.556 [2024-11-26 20:27:48.883307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.556 "name": "Existed_Raid", 00:14:55.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.556 
"strip_size_kb": 0, 00:14:55.556 "state": "configuring", 00:14:55.556 "raid_level": "raid1", 00:14:55.556 "superblock": false, 00:14:55.556 "num_base_bdevs": 4, 00:14:55.556 "num_base_bdevs_discovered": 2, 00:14:55.556 "num_base_bdevs_operational": 4, 00:14:55.556 "base_bdevs_list": [ 00:14:55.556 { 00:14:55.556 "name": "BaseBdev1", 00:14:55.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.556 "is_configured": false, 00:14:55.556 "data_offset": 0, 00:14:55.556 "data_size": 0 00:14:55.556 }, 00:14:55.556 { 00:14:55.556 "name": null, 00:14:55.556 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:55.556 "is_configured": false, 00:14:55.556 "data_offset": 0, 00:14:55.556 "data_size": 65536 00:14:55.556 }, 00:14:55.556 { 00:14:55.556 "name": "BaseBdev3", 00:14:55.556 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:55.556 "is_configured": true, 00:14:55.556 "data_offset": 0, 00:14:55.556 "data_size": 65536 00:14:55.556 }, 00:14:55.556 { 00:14:55.556 "name": "BaseBdev4", 00:14:55.556 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:55.556 "is_configured": true, 00:14:55.556 "data_offset": 0, 00:14:55.556 "data_size": 65536 00:14:55.556 } 00:14:55.556 ] 00:14:55.556 }' 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.556 20:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.129 20:27:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.129 [2024-11-26 20:27:49.499027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.129 BaseBdev1 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.129 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.130 [ 00:14:56.130 { 00:14:56.130 "name": "BaseBdev1", 00:14:56.130 "aliases": [ 00:14:56.130 "07212a60-d42c-40a2-b4e6-8b5fff152642" 00:14:56.130 ], 00:14:56.130 "product_name": "Malloc disk", 00:14:56.130 "block_size": 512, 00:14:56.130 "num_blocks": 65536, 00:14:56.130 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:56.130 "assigned_rate_limits": { 00:14:56.130 "rw_ios_per_sec": 0, 00:14:56.130 "rw_mbytes_per_sec": 0, 00:14:56.130 "r_mbytes_per_sec": 0, 00:14:56.130 "w_mbytes_per_sec": 0 00:14:56.130 }, 00:14:56.130 "claimed": true, 00:14:56.130 "claim_type": "exclusive_write", 00:14:56.130 "zoned": false, 00:14:56.130 "supported_io_types": { 00:14:56.130 "read": true, 00:14:56.130 "write": true, 00:14:56.130 "unmap": true, 00:14:56.130 "flush": true, 00:14:56.130 "reset": true, 00:14:56.130 "nvme_admin": false, 00:14:56.130 "nvme_io": false, 00:14:56.130 "nvme_io_md": false, 00:14:56.130 "write_zeroes": true, 00:14:56.130 "zcopy": true, 00:14:56.130 "get_zone_info": false, 00:14:56.130 "zone_management": false, 00:14:56.130 "zone_append": false, 00:14:56.130 "compare": false, 00:14:56.130 "compare_and_write": false, 00:14:56.130 "abort": true, 00:14:56.130 "seek_hole": false, 00:14:56.130 "seek_data": false, 00:14:56.130 "copy": true, 00:14:56.130 "nvme_iov_md": false 00:14:56.130 }, 00:14:56.130 "memory_domains": [ 00:14:56.130 { 00:14:56.130 "dma_device_id": "system", 00:14:56.130 "dma_device_type": 1 00:14:56.130 }, 00:14:56.130 { 00:14:56.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.130 "dma_device_type": 2 00:14:56.130 } 00:14:56.130 ], 00:14:56.130 "driver_specific": {} 00:14:56.130 } 00:14:56.130 ] 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.130 "name": "Existed_Raid", 00:14:56.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.130 
"strip_size_kb": 0, 00:14:56.130 "state": "configuring", 00:14:56.130 "raid_level": "raid1", 00:14:56.130 "superblock": false, 00:14:56.130 "num_base_bdevs": 4, 00:14:56.130 "num_base_bdevs_discovered": 3, 00:14:56.130 "num_base_bdevs_operational": 4, 00:14:56.130 "base_bdevs_list": [ 00:14:56.130 { 00:14:56.130 "name": "BaseBdev1", 00:14:56.130 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:56.130 "is_configured": true, 00:14:56.130 "data_offset": 0, 00:14:56.130 "data_size": 65536 00:14:56.130 }, 00:14:56.130 { 00:14:56.130 "name": null, 00:14:56.130 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:56.130 "is_configured": false, 00:14:56.130 "data_offset": 0, 00:14:56.130 "data_size": 65536 00:14:56.130 }, 00:14:56.130 { 00:14:56.130 "name": "BaseBdev3", 00:14:56.130 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:56.130 "is_configured": true, 00:14:56.130 "data_offset": 0, 00:14:56.130 "data_size": 65536 00:14:56.130 }, 00:14:56.130 { 00:14:56.130 "name": "BaseBdev4", 00:14:56.130 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:56.130 "is_configured": true, 00:14:56.130 "data_offset": 0, 00:14:56.130 "data_size": 65536 00:14:56.130 } 00:14:56.130 ] 00:14:56.130 }' 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.130 20:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.698 
20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.698 [2024-11-26 20:27:50.086184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.698 "name": "Existed_Raid", 00:14:56.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.698 "strip_size_kb": 0, 00:14:56.698 "state": "configuring", 00:14:56.698 "raid_level": "raid1", 00:14:56.698 "superblock": false, 00:14:56.698 "num_base_bdevs": 4, 00:14:56.698 "num_base_bdevs_discovered": 2, 00:14:56.698 "num_base_bdevs_operational": 4, 00:14:56.698 "base_bdevs_list": [ 00:14:56.698 { 00:14:56.698 "name": "BaseBdev1", 00:14:56.698 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:56.698 "is_configured": true, 00:14:56.698 "data_offset": 0, 00:14:56.698 "data_size": 65536 00:14:56.698 }, 00:14:56.698 { 00:14:56.698 "name": null, 00:14:56.698 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:56.698 "is_configured": false, 00:14:56.698 "data_offset": 0, 00:14:56.698 "data_size": 65536 00:14:56.698 }, 00:14:56.698 { 00:14:56.698 "name": null, 00:14:56.698 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:56.698 "is_configured": false, 00:14:56.698 "data_offset": 0, 00:14:56.698 "data_size": 65536 00:14:56.698 }, 00:14:56.698 { 00:14:56.698 "name": "BaseBdev4", 00:14:56.698 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:56.698 "is_configured": true, 00:14:56.698 "data_offset": 0, 00:14:56.698 "data_size": 65536 00:14:56.698 } 00:14:56.698 ] 00:14:56.698 }' 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.698 20:27:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.267 [2024-11-26 20:27:50.609284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.267 "name": "Existed_Raid", 00:14:57.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.267 "strip_size_kb": 0, 00:14:57.267 "state": "configuring", 00:14:57.267 "raid_level": "raid1", 00:14:57.267 "superblock": false, 00:14:57.267 "num_base_bdevs": 4, 00:14:57.267 "num_base_bdevs_discovered": 3, 00:14:57.267 "num_base_bdevs_operational": 4, 00:14:57.267 "base_bdevs_list": [ 00:14:57.267 { 00:14:57.267 "name": "BaseBdev1", 00:14:57.267 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:57.267 "is_configured": true, 00:14:57.267 "data_offset": 0, 00:14:57.267 "data_size": 65536 00:14:57.267 }, 00:14:57.267 { 00:14:57.267 "name": null, 00:14:57.267 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:57.267 "is_configured": false, 00:14:57.267 "data_offset": 0, 00:14:57.267 "data_size": 65536 00:14:57.267 }, 00:14:57.267 { 
00:14:57.267 "name": "BaseBdev3", 00:14:57.267 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:57.267 "is_configured": true, 00:14:57.267 "data_offset": 0, 00:14:57.267 "data_size": 65536 00:14:57.267 }, 00:14:57.267 { 00:14:57.267 "name": "BaseBdev4", 00:14:57.267 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:57.267 "is_configured": true, 00:14:57.267 "data_offset": 0, 00:14:57.267 "data_size": 65536 00:14:57.267 } 00:14:57.267 ] 00:14:57.267 }' 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.267 20:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.837 [2024-11-26 20:27:51.144496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.837 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.838 "name": "Existed_Raid", 00:14:57.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.838 "strip_size_kb": 0, 00:14:57.838 "state": "configuring", 00:14:57.838 "raid_level": "raid1", 00:14:57.838 "superblock": false, 00:14:57.838 
"num_base_bdevs": 4, 00:14:57.838 "num_base_bdevs_discovered": 2, 00:14:57.838 "num_base_bdevs_operational": 4, 00:14:57.838 "base_bdevs_list": [ 00:14:57.838 { 00:14:57.838 "name": null, 00:14:57.838 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:57.838 "is_configured": false, 00:14:57.838 "data_offset": 0, 00:14:57.838 "data_size": 65536 00:14:57.838 }, 00:14:57.838 { 00:14:57.838 "name": null, 00:14:57.838 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:57.838 "is_configured": false, 00:14:57.838 "data_offset": 0, 00:14:57.838 "data_size": 65536 00:14:57.838 }, 00:14:57.838 { 00:14:57.838 "name": "BaseBdev3", 00:14:57.838 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:57.838 "is_configured": true, 00:14:57.838 "data_offset": 0, 00:14:57.838 "data_size": 65536 00:14:57.838 }, 00:14:57.838 { 00:14:57.838 "name": "BaseBdev4", 00:14:57.838 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:57.838 "is_configured": true, 00:14:57.838 "data_offset": 0, 00:14:57.838 "data_size": 65536 00:14:57.838 } 00:14:57.838 ] 00:14:57.838 }' 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.838 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:58.406 20:27:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.406 [2024-11-26 20:27:51.772128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.406 20:27:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.406 "name": "Existed_Raid", 00:14:58.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.406 "strip_size_kb": 0, 00:14:58.406 "state": "configuring", 00:14:58.406 "raid_level": "raid1", 00:14:58.406 "superblock": false, 00:14:58.406 "num_base_bdevs": 4, 00:14:58.406 "num_base_bdevs_discovered": 3, 00:14:58.406 "num_base_bdevs_operational": 4, 00:14:58.406 "base_bdevs_list": [ 00:14:58.406 { 00:14:58.406 "name": null, 00:14:58.406 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:58.406 "is_configured": false, 00:14:58.406 "data_offset": 0, 00:14:58.406 "data_size": 65536 00:14:58.406 }, 00:14:58.406 { 00:14:58.406 "name": "BaseBdev2", 00:14:58.406 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:58.406 "is_configured": true, 00:14:58.406 "data_offset": 0, 00:14:58.406 "data_size": 65536 00:14:58.406 }, 00:14:58.406 { 00:14:58.406 "name": "BaseBdev3", 00:14:58.406 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:58.406 "is_configured": true, 00:14:58.406 "data_offset": 0, 00:14:58.406 "data_size": 65536 00:14:58.406 }, 00:14:58.406 { 00:14:58.406 "name": "BaseBdev4", 00:14:58.406 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:58.406 "is_configured": true, 00:14:58.406 "data_offset": 0, 00:14:58.406 "data_size": 65536 00:14:58.406 } 00:14:58.406 ] 00:14:58.406 }' 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.406 20:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 20:27:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.666 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.666 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.666 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 07212a60-d42c-40a2-b4e6-8b5fff152642 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.926 [2024-11-26 20:27:52.319130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.926 [2024-11-26 20:27:52.319184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:58.926 [2024-11-26 20:27:52.319193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:58.926 [2024-11-26 20:27:52.319608] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:58.926 [2024-11-26 20:27:52.319848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:58.926 [2024-11-26 20:27:52.319865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:58.926 [2024-11-26 20:27:52.320160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.926 NewBaseBdev 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.926 20:27:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.926 [ 00:14:58.926 { 00:14:58.926 "name": "NewBaseBdev", 00:14:58.926 "aliases": [ 00:14:58.926 "07212a60-d42c-40a2-b4e6-8b5fff152642" 00:14:58.926 ], 00:14:58.926 "product_name": "Malloc disk", 00:14:58.926 "block_size": 512, 00:14:58.926 "num_blocks": 65536, 00:14:58.926 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:58.926 "assigned_rate_limits": { 00:14:58.926 "rw_ios_per_sec": 0, 00:14:58.926 "rw_mbytes_per_sec": 0, 00:14:58.926 "r_mbytes_per_sec": 0, 00:14:58.926 "w_mbytes_per_sec": 0 00:14:58.926 }, 00:14:58.926 "claimed": true, 00:14:58.926 "claim_type": "exclusive_write", 00:14:58.926 "zoned": false, 00:14:58.926 "supported_io_types": { 00:14:58.926 "read": true, 00:14:58.926 "write": true, 00:14:58.926 "unmap": true, 00:14:58.926 "flush": true, 00:14:58.926 "reset": true, 00:14:58.926 "nvme_admin": false, 00:14:58.926 "nvme_io": false, 00:14:58.926 "nvme_io_md": false, 00:14:58.926 "write_zeroes": true, 00:14:58.926 "zcopy": true, 00:14:58.926 "get_zone_info": false, 00:14:58.926 "zone_management": false, 00:14:58.926 "zone_append": false, 00:14:58.926 "compare": false, 00:14:58.926 "compare_and_write": false, 00:14:58.926 "abort": true, 00:14:58.926 "seek_hole": false, 00:14:58.926 "seek_data": false, 00:14:58.926 "copy": true, 00:14:58.926 "nvme_iov_md": false 00:14:58.926 }, 00:14:58.926 "memory_domains": [ 00:14:58.926 { 00:14:58.926 "dma_device_id": "system", 00:14:58.926 "dma_device_type": 1 00:14:58.926 }, 00:14:58.926 { 00:14:58.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.926 "dma_device_type": 2 00:14:58.926 } 00:14:58.926 ], 00:14:58.926 "driver_specific": {} 00:14:58.926 } 00:14:58.926 ] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:58.926 20:27:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.926 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.926 "name": "Existed_Raid", 00:14:58.926 "uuid": "b8d900bb-65d6-40c8-8038-6c1cd9d4bc6d", 00:14:58.926 "strip_size_kb": 0, 00:14:58.926 "state": "online", 00:14:58.926 "raid_level": "raid1", 
00:14:58.926 "superblock": false, 00:14:58.926 "num_base_bdevs": 4, 00:14:58.926 "num_base_bdevs_discovered": 4, 00:14:58.926 "num_base_bdevs_operational": 4, 00:14:58.926 "base_bdevs_list": [ 00:14:58.926 { 00:14:58.926 "name": "NewBaseBdev", 00:14:58.926 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:58.926 "is_configured": true, 00:14:58.926 "data_offset": 0, 00:14:58.926 "data_size": 65536 00:14:58.926 }, 00:14:58.926 { 00:14:58.926 "name": "BaseBdev2", 00:14:58.926 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:58.926 "is_configured": true, 00:14:58.926 "data_offset": 0, 00:14:58.926 "data_size": 65536 00:14:58.926 }, 00:14:58.926 { 00:14:58.926 "name": "BaseBdev3", 00:14:58.926 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:58.926 "is_configured": true, 00:14:58.926 "data_offset": 0, 00:14:58.926 "data_size": 65536 00:14:58.926 }, 00:14:58.926 { 00:14:58.926 "name": "BaseBdev4", 00:14:58.927 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:58.927 "is_configured": true, 00:14:58.927 "data_offset": 0, 00:14:58.927 "data_size": 65536 00:14:58.927 } 00:14:58.927 ] 00:14:58.927 }' 00:14:58.927 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.927 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.497 [2024-11-26 20:27:52.826720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.497 "name": "Existed_Raid", 00:14:59.497 "aliases": [ 00:14:59.497 "b8d900bb-65d6-40c8-8038-6c1cd9d4bc6d" 00:14:59.497 ], 00:14:59.497 "product_name": "Raid Volume", 00:14:59.497 "block_size": 512, 00:14:59.497 "num_blocks": 65536, 00:14:59.497 "uuid": "b8d900bb-65d6-40c8-8038-6c1cd9d4bc6d", 00:14:59.497 "assigned_rate_limits": { 00:14:59.497 "rw_ios_per_sec": 0, 00:14:59.497 "rw_mbytes_per_sec": 0, 00:14:59.497 "r_mbytes_per_sec": 0, 00:14:59.497 "w_mbytes_per_sec": 0 00:14:59.497 }, 00:14:59.497 "claimed": false, 00:14:59.497 "zoned": false, 00:14:59.497 "supported_io_types": { 00:14:59.497 "read": true, 00:14:59.497 "write": true, 00:14:59.497 "unmap": false, 00:14:59.497 "flush": false, 00:14:59.497 "reset": true, 00:14:59.497 "nvme_admin": false, 00:14:59.497 "nvme_io": false, 00:14:59.497 "nvme_io_md": false, 00:14:59.497 "write_zeroes": true, 00:14:59.497 "zcopy": false, 00:14:59.497 "get_zone_info": false, 00:14:59.497 "zone_management": false, 00:14:59.497 "zone_append": false, 00:14:59.497 "compare": false, 00:14:59.497 "compare_and_write": false, 00:14:59.497 "abort": false, 00:14:59.497 "seek_hole": false, 00:14:59.497 "seek_data": false, 00:14:59.497 "copy": false, 00:14:59.497 
"nvme_iov_md": false 00:14:59.497 }, 00:14:59.497 "memory_domains": [ 00:14:59.497 { 00:14:59.497 "dma_device_id": "system", 00:14:59.497 "dma_device_type": 1 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.497 "dma_device_type": 2 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "dma_device_id": "system", 00:14:59.497 "dma_device_type": 1 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.497 "dma_device_type": 2 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "dma_device_id": "system", 00:14:59.497 "dma_device_type": 1 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.497 "dma_device_type": 2 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "dma_device_id": "system", 00:14:59.497 "dma_device_type": 1 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.497 "dma_device_type": 2 00:14:59.497 } 00:14:59.497 ], 00:14:59.497 "driver_specific": { 00:14:59.497 "raid": { 00:14:59.497 "uuid": "b8d900bb-65d6-40c8-8038-6c1cd9d4bc6d", 00:14:59.497 "strip_size_kb": 0, 00:14:59.497 "state": "online", 00:14:59.497 "raid_level": "raid1", 00:14:59.497 "superblock": false, 00:14:59.497 "num_base_bdevs": 4, 00:14:59.497 "num_base_bdevs_discovered": 4, 00:14:59.497 "num_base_bdevs_operational": 4, 00:14:59.497 "base_bdevs_list": [ 00:14:59.497 { 00:14:59.497 "name": "NewBaseBdev", 00:14:59.497 "uuid": "07212a60-d42c-40a2-b4e6-8b5fff152642", 00:14:59.497 "is_configured": true, 00:14:59.497 "data_offset": 0, 00:14:59.497 "data_size": 65536 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "name": "BaseBdev2", 00:14:59.497 "uuid": "6768d65c-0419-42c8-83f0-0e403fc1a560", 00:14:59.497 "is_configured": true, 00:14:59.497 "data_offset": 0, 00:14:59.497 "data_size": 65536 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "name": "BaseBdev3", 00:14:59.497 "uuid": "99f336bd-7cde-4582-8360-4babfae5c282", 00:14:59.497 "is_configured": true, 
00:14:59.497 "data_offset": 0, 00:14:59.497 "data_size": 65536 00:14:59.497 }, 00:14:59.497 { 00:14:59.497 "name": "BaseBdev4", 00:14:59.497 "uuid": "9186921d-9248-4f7d-886f-b18499df34ba", 00:14:59.497 "is_configured": true, 00:14:59.497 "data_offset": 0, 00:14:59.497 "data_size": 65536 00:14:59.497 } 00:14:59.497 ] 00:14:59.497 } 00:14:59.497 } 00:14:59.497 }' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:59.497 BaseBdev2 00:14:59.497 BaseBdev3 00:14:59.497 BaseBdev4' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.497 20:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.497 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.497 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.497 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.497 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.498 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.498 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.498 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.498 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.759 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.759 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.759 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.759 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.760 [2024-11-26 20:27:53.145788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.760 [2024-11-26 20:27:53.145819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.760 [2024-11-26 20:27:53.145910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.760 [2024-11-26 20:27:53.146231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.760 [2024-11-26 20:27:53.146261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73523 
00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73523 ']' 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73523 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73523 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73523' 00:14:59.760 killing process with pid 73523 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73523 00:14:59.760 [2024-11-26 20:27:53.196746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.760 20:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73523 00:15:00.329 [2024-11-26 20:27:53.621885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:01.708 00:15:01.708 real 0m12.344s 00:15:01.708 user 0m19.646s 00:15:01.708 sys 0m2.113s 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.708 ************************************ 00:15:01.708 END TEST raid_state_function_test 00:15:01.708 ************************************ 00:15:01.708 20:27:54 bdev_raid -- 
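The trace above repeatedly runs `rpc_cmd bdev_raid_get_bdevs all` and filters the result with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields such as `state` and `num_base_bdevs_discovered`. The following is an illustrative sketch of that verification pattern, not code from the SPDK repository; the inline JSON is a trimmed sample shaped like the `Existed_Raid` dumps in this log, and the variable names are chosen here for clarity.

```shell
#!/usr/bin/env bash
# Sample rpc output, trimmed to the fields the checks above actually read.
rpc_output='[{"name":"Existed_Raid","state":"online","raid_level":"raid1","num_base_bdevs":4,"num_base_bdevs_discovered":4,"base_bdevs_list":[{"name":"NewBaseBdev","is_configured":true}]}]'

# Select the raid bdev by name, mirroring the log's jq filter.
info=$(echo "$rpc_output" | jq -r '.[] | select(.name == "Existed_Raid")')

# Extract the fields that verify_raid_bdev_state-style checks compare.
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

# The pass/fail comparison, as in the [[ true == \t\r\u\e ]] style checks.
[[ "$state" == online ]] && [[ "$discovered" -eq 4 ]] && echo "state verified"
```

The `select(.name == ...)` step matters because `bdev_raid_get_bdevs all` returns an array; comparing fields without first narrowing to the bdev under test would read whichever element happens to be first.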
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:01.708 20:27:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:01.708 20:27:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.708 20:27:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.708 ************************************ 00:15:01.708 START TEST raid_state_function_test_sb 00:15:01.708 ************************************ 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:01.708 20:27:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:01.708 20:27:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74200 00:15:01.708 Process raid pid: 74200 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74200' 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74200 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74200 ']' 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.708 20:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.708 [2024-11-26 20:27:55.019045] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:15:01.708 [2024-11-26 20:27:55.019167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.708 [2024-11-26 20:27:55.177596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.967 [2024-11-26 20:27:55.302568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.226 [2024-11-26 20:27:55.527168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.226 [2024-11-26 20:27:55.527212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.485 [2024-11-26 20:27:55.915524] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.485 [2024-11-26 20:27:55.915632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.485 [2024-11-26 20:27:55.915648] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.485 [2024-11-26 20:27:55.915658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.485 [2024-11-26 20:27:55.915665] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:02.485 [2024-11-26 20:27:55.915690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:02.485 [2024-11-26 20:27:55.915697] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:02.485 [2024-11-26 20:27:55.915707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.485 20:27:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.485 "name": "Existed_Raid", 00:15:02.485 "uuid": "6973bc43-9ffc-43e4-8421-f6f7f009c4fe", 00:15:02.485 "strip_size_kb": 0, 00:15:02.485 "state": "configuring", 00:15:02.485 "raid_level": "raid1", 00:15:02.485 "superblock": true, 00:15:02.485 "num_base_bdevs": 4, 00:15:02.485 "num_base_bdevs_discovered": 0, 00:15:02.485 "num_base_bdevs_operational": 4, 00:15:02.485 "base_bdevs_list": [ 00:15:02.485 { 00:15:02.485 "name": "BaseBdev1", 00:15:02.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.485 "is_configured": false, 00:15:02.485 "data_offset": 0, 00:15:02.485 "data_size": 0 00:15:02.485 }, 00:15:02.485 { 00:15:02.485 "name": "BaseBdev2", 00:15:02.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.485 "is_configured": false, 00:15:02.485 "data_offset": 0, 00:15:02.485 "data_size": 0 00:15:02.485 }, 00:15:02.485 { 00:15:02.485 "name": "BaseBdev3", 00:15:02.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.485 "is_configured": false, 00:15:02.485 "data_offset": 0, 00:15:02.485 "data_size": 0 00:15:02.485 }, 00:15:02.485 { 00:15:02.485 "name": "BaseBdev4", 00:15:02.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.485 "is_configured": false, 00:15:02.485 "data_offset": 0, 00:15:02.485 "data_size": 0 00:15:02.485 } 00:15:02.485 ] 00:15:02.485 }' 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.485 20:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.779 20:27:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 [2024-11-26 20:27:56.338791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.039 [2024-11-26 20:27:56.338901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 [2024-11-26 20:27:56.346775] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.039 [2024-11-26 20:27:56.346870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.039 [2024-11-26 20:27:56.346908] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.039 [2024-11-26 20:27:56.346962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.039 [2024-11-26 20:27:56.346995] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.039 [2024-11-26 20:27:56.347047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.039 [2024-11-26 20:27:56.347079] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:03.039 [2024-11-26 20:27:56.347115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 [2024-11-26 20:27:56.391878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.039 BaseBdev1 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.039 [ 00:15:03.039 { 00:15:03.039 "name": "BaseBdev1", 00:15:03.039 "aliases": [ 00:15:03.039 "43876485-e4d6-411a-9221-7ad98a656635" 00:15:03.039 ], 00:15:03.039 "product_name": "Malloc disk", 00:15:03.039 "block_size": 512, 00:15:03.039 "num_blocks": 65536, 00:15:03.039 "uuid": "43876485-e4d6-411a-9221-7ad98a656635", 00:15:03.039 "assigned_rate_limits": { 00:15:03.039 "rw_ios_per_sec": 0, 00:15:03.039 "rw_mbytes_per_sec": 0, 00:15:03.039 "r_mbytes_per_sec": 0, 00:15:03.039 "w_mbytes_per_sec": 0 00:15:03.039 }, 00:15:03.039 "claimed": true, 00:15:03.039 "claim_type": "exclusive_write", 00:15:03.039 "zoned": false, 00:15:03.039 "supported_io_types": { 00:15:03.039 "read": true, 00:15:03.039 "write": true, 00:15:03.039 "unmap": true, 00:15:03.039 "flush": true, 00:15:03.039 "reset": true, 00:15:03.039 "nvme_admin": false, 00:15:03.039 "nvme_io": false, 00:15:03.039 "nvme_io_md": false, 00:15:03.039 "write_zeroes": true, 00:15:03.039 "zcopy": true, 00:15:03.039 "get_zone_info": false, 00:15:03.039 "zone_management": false, 00:15:03.039 "zone_append": false, 00:15:03.039 "compare": false, 00:15:03.039 "compare_and_write": false, 00:15:03.039 "abort": true, 00:15:03.039 "seek_hole": false, 00:15:03.039 "seek_data": false, 00:15:03.039 "copy": true, 00:15:03.039 "nvme_iov_md": false 00:15:03.039 }, 00:15:03.039 "memory_domains": [ 00:15:03.039 { 00:15:03.039 "dma_device_id": "system", 00:15:03.039 "dma_device_type": 1 00:15:03.039 }, 00:15:03.039 { 00:15:03.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.039 "dma_device_type": 2 00:15:03.039 } 00:15:03.039 ], 00:15:03.039 "driver_specific": {} 
00:15:03.039 } 00:15:03.039 ] 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.039 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.040 20:27:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.040 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.040 "name": "Existed_Raid", 00:15:03.040 "uuid": "a3c56d79-df47-444c-8196-2fb0c42396e1", 00:15:03.040 "strip_size_kb": 0, 00:15:03.040 "state": "configuring", 00:15:03.040 "raid_level": "raid1", 00:15:03.040 "superblock": true, 00:15:03.040 "num_base_bdevs": 4, 00:15:03.040 "num_base_bdevs_discovered": 1, 00:15:03.040 "num_base_bdevs_operational": 4, 00:15:03.040 "base_bdevs_list": [ 00:15:03.040 { 00:15:03.040 "name": "BaseBdev1", 00:15:03.040 "uuid": "43876485-e4d6-411a-9221-7ad98a656635", 00:15:03.040 "is_configured": true, 00:15:03.040 "data_offset": 2048, 00:15:03.040 "data_size": 63488 00:15:03.040 }, 00:15:03.040 { 00:15:03.040 "name": "BaseBdev2", 00:15:03.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.040 "is_configured": false, 00:15:03.040 "data_offset": 0, 00:15:03.040 "data_size": 0 00:15:03.040 }, 00:15:03.040 { 00:15:03.040 "name": "BaseBdev3", 00:15:03.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.040 "is_configured": false, 00:15:03.040 "data_offset": 0, 00:15:03.040 "data_size": 0 00:15:03.040 }, 00:15:03.040 { 00:15:03.040 "name": "BaseBdev4", 00:15:03.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.040 "is_configured": false, 00:15:03.040 "data_offset": 0, 00:15:03.040 "data_size": 0 00:15:03.040 } 00:15:03.040 ] 00:15:03.040 }' 00:15:03.040 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.040 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.608 [2024-11-26 20:27:56.899083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.608 [2024-11-26 20:27:56.899143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.608 [2024-11-26 20:27:56.911118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.608 [2024-11-26 20:27:56.913132] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.608 [2024-11-26 20:27:56.913224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.608 [2024-11-26 20:27:56.913258] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.608 [2024-11-26 20:27:56.913273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.608 [2024-11-26 20:27:56.913281] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:03.608 [2024-11-26 20:27:56.913291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:03.608 20:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.608 "name": 
"Existed_Raid", 00:15:03.608 "uuid": "f8899d1c-cd05-46a2-b5d0-214e5663e483", 00:15:03.608 "strip_size_kb": 0, 00:15:03.608 "state": "configuring", 00:15:03.608 "raid_level": "raid1", 00:15:03.608 "superblock": true, 00:15:03.608 "num_base_bdevs": 4, 00:15:03.608 "num_base_bdevs_discovered": 1, 00:15:03.608 "num_base_bdevs_operational": 4, 00:15:03.608 "base_bdevs_list": [ 00:15:03.608 { 00:15:03.608 "name": "BaseBdev1", 00:15:03.608 "uuid": "43876485-e4d6-411a-9221-7ad98a656635", 00:15:03.608 "is_configured": true, 00:15:03.608 "data_offset": 2048, 00:15:03.608 "data_size": 63488 00:15:03.608 }, 00:15:03.608 { 00:15:03.608 "name": "BaseBdev2", 00:15:03.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.608 "is_configured": false, 00:15:03.608 "data_offset": 0, 00:15:03.608 "data_size": 0 00:15:03.608 }, 00:15:03.608 { 00:15:03.608 "name": "BaseBdev3", 00:15:03.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.608 "is_configured": false, 00:15:03.608 "data_offset": 0, 00:15:03.608 "data_size": 0 00:15:03.608 }, 00:15:03.608 { 00:15:03.608 "name": "BaseBdev4", 00:15:03.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.608 "is_configured": false, 00:15:03.608 "data_offset": 0, 00:15:03.608 "data_size": 0 00:15:03.608 } 00:15:03.608 ] 00:15:03.608 }' 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.608 20:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.868 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:03.868 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.868 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.129 [2024-11-26 20:27:57.451788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.129 
BaseBdev2 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.129 [ 00:15:04.129 { 00:15:04.129 "name": "BaseBdev2", 00:15:04.129 "aliases": [ 00:15:04.129 "e80f8fee-c902-4e1c-b59d-0b059f3ae5c2" 00:15:04.129 ], 00:15:04.129 "product_name": "Malloc disk", 00:15:04.129 "block_size": 512, 00:15:04.129 "num_blocks": 65536, 00:15:04.129 "uuid": "e80f8fee-c902-4e1c-b59d-0b059f3ae5c2", 00:15:04.129 "assigned_rate_limits": { 
00:15:04.129 "rw_ios_per_sec": 0, 00:15:04.129 "rw_mbytes_per_sec": 0, 00:15:04.129 "r_mbytes_per_sec": 0, 00:15:04.129 "w_mbytes_per_sec": 0 00:15:04.129 }, 00:15:04.129 "claimed": true, 00:15:04.129 "claim_type": "exclusive_write", 00:15:04.129 "zoned": false, 00:15:04.129 "supported_io_types": { 00:15:04.129 "read": true, 00:15:04.129 "write": true, 00:15:04.129 "unmap": true, 00:15:04.129 "flush": true, 00:15:04.129 "reset": true, 00:15:04.129 "nvme_admin": false, 00:15:04.129 "nvme_io": false, 00:15:04.129 "nvme_io_md": false, 00:15:04.129 "write_zeroes": true, 00:15:04.129 "zcopy": true, 00:15:04.129 "get_zone_info": false, 00:15:04.129 "zone_management": false, 00:15:04.129 "zone_append": false, 00:15:04.129 "compare": false, 00:15:04.129 "compare_and_write": false, 00:15:04.129 "abort": true, 00:15:04.129 "seek_hole": false, 00:15:04.129 "seek_data": false, 00:15:04.129 "copy": true, 00:15:04.129 "nvme_iov_md": false 00:15:04.129 }, 00:15:04.129 "memory_domains": [ 00:15:04.129 { 00:15:04.129 "dma_device_id": "system", 00:15:04.129 "dma_device_type": 1 00:15:04.129 }, 00:15:04.129 { 00:15:04.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.129 "dma_device_type": 2 00:15:04.129 } 00:15:04.129 ], 00:15:04.129 "driver_specific": {} 00:15:04.129 } 00:15:04.129 ] 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.129 "name": "Existed_Raid", 00:15:04.129 "uuid": "f8899d1c-cd05-46a2-b5d0-214e5663e483", 00:15:04.129 "strip_size_kb": 0, 00:15:04.129 "state": "configuring", 00:15:04.129 "raid_level": "raid1", 00:15:04.129 "superblock": true, 00:15:04.129 "num_base_bdevs": 4, 00:15:04.129 "num_base_bdevs_discovered": 2, 00:15:04.129 "num_base_bdevs_operational": 4, 00:15:04.129 
"base_bdevs_list": [ 00:15:04.129 { 00:15:04.129 "name": "BaseBdev1", 00:15:04.129 "uuid": "43876485-e4d6-411a-9221-7ad98a656635", 00:15:04.129 "is_configured": true, 00:15:04.129 "data_offset": 2048, 00:15:04.129 "data_size": 63488 00:15:04.129 }, 00:15:04.129 { 00:15:04.129 "name": "BaseBdev2", 00:15:04.129 "uuid": "e80f8fee-c902-4e1c-b59d-0b059f3ae5c2", 00:15:04.129 "is_configured": true, 00:15:04.129 "data_offset": 2048, 00:15:04.129 "data_size": 63488 00:15:04.129 }, 00:15:04.129 { 00:15:04.129 "name": "BaseBdev3", 00:15:04.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.129 "is_configured": false, 00:15:04.129 "data_offset": 0, 00:15:04.129 "data_size": 0 00:15:04.129 }, 00:15:04.129 { 00:15:04.129 "name": "BaseBdev4", 00:15:04.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.129 "is_configured": false, 00:15:04.129 "data_offset": 0, 00:15:04.129 "data_size": 0 00:15:04.129 } 00:15:04.129 ] 00:15:04.129 }' 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.129 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.697 20:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:04.697 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.697 20:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.697 [2024-11-26 20:27:58.024255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.698 BaseBdev3 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.698 [ 00:15:04.698 { 00:15:04.698 "name": "BaseBdev3", 00:15:04.698 "aliases": [ 00:15:04.698 "90b0f672-2de7-404e-a379-6dd6e2d902a0" 00:15:04.698 ], 00:15:04.698 "product_name": "Malloc disk", 00:15:04.698 "block_size": 512, 00:15:04.698 "num_blocks": 65536, 00:15:04.698 "uuid": "90b0f672-2de7-404e-a379-6dd6e2d902a0", 00:15:04.698 "assigned_rate_limits": { 00:15:04.698 "rw_ios_per_sec": 0, 00:15:04.698 "rw_mbytes_per_sec": 0, 00:15:04.698 "r_mbytes_per_sec": 0, 00:15:04.698 "w_mbytes_per_sec": 0 00:15:04.698 }, 00:15:04.698 "claimed": true, 00:15:04.698 "claim_type": "exclusive_write", 00:15:04.698 "zoned": false, 00:15:04.698 "supported_io_types": { 00:15:04.698 "read": true, 00:15:04.698 
"write": true, 00:15:04.698 "unmap": true, 00:15:04.698 "flush": true, 00:15:04.698 "reset": true, 00:15:04.698 "nvme_admin": false, 00:15:04.698 "nvme_io": false, 00:15:04.698 "nvme_io_md": false, 00:15:04.698 "write_zeroes": true, 00:15:04.698 "zcopy": true, 00:15:04.698 "get_zone_info": false, 00:15:04.698 "zone_management": false, 00:15:04.698 "zone_append": false, 00:15:04.698 "compare": false, 00:15:04.698 "compare_and_write": false, 00:15:04.698 "abort": true, 00:15:04.698 "seek_hole": false, 00:15:04.698 "seek_data": false, 00:15:04.698 "copy": true, 00:15:04.698 "nvme_iov_md": false 00:15:04.698 }, 00:15:04.698 "memory_domains": [ 00:15:04.698 { 00:15:04.698 "dma_device_id": "system", 00:15:04.698 "dma_device_type": 1 00:15:04.698 }, 00:15:04.698 { 00:15:04.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.698 "dma_device_type": 2 00:15:04.698 } 00:15:04.698 ], 00:15:04.698 "driver_specific": {} 00:15:04.698 } 00:15:04.698 ] 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.698 "name": "Existed_Raid", 00:15:04.698 "uuid": "f8899d1c-cd05-46a2-b5d0-214e5663e483", 00:15:04.698 "strip_size_kb": 0, 00:15:04.698 "state": "configuring", 00:15:04.698 "raid_level": "raid1", 00:15:04.698 "superblock": true, 00:15:04.698 "num_base_bdevs": 4, 00:15:04.698 "num_base_bdevs_discovered": 3, 00:15:04.698 "num_base_bdevs_operational": 4, 00:15:04.698 "base_bdevs_list": [ 00:15:04.698 { 00:15:04.698 "name": "BaseBdev1", 00:15:04.698 "uuid": "43876485-e4d6-411a-9221-7ad98a656635", 00:15:04.698 "is_configured": true, 00:15:04.698 "data_offset": 2048, 00:15:04.698 "data_size": 63488 00:15:04.698 }, 00:15:04.698 { 00:15:04.698 "name": "BaseBdev2", 00:15:04.698 "uuid": 
"e80f8fee-c902-4e1c-b59d-0b059f3ae5c2", 00:15:04.698 "is_configured": true, 00:15:04.698 "data_offset": 2048, 00:15:04.698 "data_size": 63488 00:15:04.698 }, 00:15:04.698 { 00:15:04.698 "name": "BaseBdev3", 00:15:04.698 "uuid": "90b0f672-2de7-404e-a379-6dd6e2d902a0", 00:15:04.698 "is_configured": true, 00:15:04.698 "data_offset": 2048, 00:15:04.698 "data_size": 63488 00:15:04.698 }, 00:15:04.698 { 00:15:04.698 "name": "BaseBdev4", 00:15:04.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.698 "is_configured": false, 00:15:04.698 "data_offset": 0, 00:15:04.698 "data_size": 0 00:15:04.698 } 00:15:04.698 ] 00:15:04.698 }' 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.698 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.959 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:04.959 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.959 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.218 [2024-11-26 20:27:58.531759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:05.218 [2024-11-26 20:27:58.532175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:05.218 [2024-11-26 20:27:58.532234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:05.218 [2024-11-26 20:27:58.532576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:05.218 [2024-11-26 20:27:58.532812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:05.218 [2024-11-26 20:27:58.532866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:15:05.218 BaseBdev4 00:15:05.218 [2024-11-26 20:27:58.533078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.218 [ 00:15:05.218 { 00:15:05.218 "name": "BaseBdev4", 00:15:05.218 "aliases": [ 00:15:05.218 "384a53f0-b79d-4655-baf6-989832074cf9" 00:15:05.218 ], 00:15:05.218 "product_name": "Malloc disk", 00:15:05.218 "block_size": 512, 00:15:05.218 
"num_blocks": 65536, 00:15:05.218 "uuid": "384a53f0-b79d-4655-baf6-989832074cf9", 00:15:05.218 "assigned_rate_limits": { 00:15:05.218 "rw_ios_per_sec": 0, 00:15:05.218 "rw_mbytes_per_sec": 0, 00:15:05.218 "r_mbytes_per_sec": 0, 00:15:05.218 "w_mbytes_per_sec": 0 00:15:05.218 }, 00:15:05.218 "claimed": true, 00:15:05.218 "claim_type": "exclusive_write", 00:15:05.218 "zoned": false, 00:15:05.218 "supported_io_types": { 00:15:05.218 "read": true, 00:15:05.218 "write": true, 00:15:05.218 "unmap": true, 00:15:05.218 "flush": true, 00:15:05.218 "reset": true, 00:15:05.218 "nvme_admin": false, 00:15:05.218 "nvme_io": false, 00:15:05.218 "nvme_io_md": false, 00:15:05.218 "write_zeroes": true, 00:15:05.218 "zcopy": true, 00:15:05.218 "get_zone_info": false, 00:15:05.218 "zone_management": false, 00:15:05.218 "zone_append": false, 00:15:05.218 "compare": false, 00:15:05.218 "compare_and_write": false, 00:15:05.218 "abort": true, 00:15:05.218 "seek_hole": false, 00:15:05.218 "seek_data": false, 00:15:05.218 "copy": true, 00:15:05.218 "nvme_iov_md": false 00:15:05.218 }, 00:15:05.218 "memory_domains": [ 00:15:05.218 { 00:15:05.218 "dma_device_id": "system", 00:15:05.218 "dma_device_type": 1 00:15:05.218 }, 00:15:05.218 { 00:15:05.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.218 "dma_device_type": 2 00:15:05.218 } 00:15:05.218 ], 00:15:05.218 "driver_specific": {} 00:15:05.218 } 00:15:05.218 ] 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.218 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.219 "name": "Existed_Raid", 00:15:05.219 "uuid": "f8899d1c-cd05-46a2-b5d0-214e5663e483", 00:15:05.219 "strip_size_kb": 0, 00:15:05.219 "state": "online", 00:15:05.219 "raid_level": "raid1", 00:15:05.219 "superblock": true, 00:15:05.219 "num_base_bdevs": 4, 
00:15:05.219 "num_base_bdevs_discovered": 4, 00:15:05.219 "num_base_bdevs_operational": 4, 00:15:05.219 "base_bdevs_list": [ 00:15:05.219 { 00:15:05.219 "name": "BaseBdev1", 00:15:05.219 "uuid": "43876485-e4d6-411a-9221-7ad98a656635", 00:15:05.219 "is_configured": true, 00:15:05.219 "data_offset": 2048, 00:15:05.219 "data_size": 63488 00:15:05.219 }, 00:15:05.219 { 00:15:05.219 "name": "BaseBdev2", 00:15:05.219 "uuid": "e80f8fee-c902-4e1c-b59d-0b059f3ae5c2", 00:15:05.219 "is_configured": true, 00:15:05.219 "data_offset": 2048, 00:15:05.219 "data_size": 63488 00:15:05.219 }, 00:15:05.219 { 00:15:05.219 "name": "BaseBdev3", 00:15:05.219 "uuid": "90b0f672-2de7-404e-a379-6dd6e2d902a0", 00:15:05.219 "is_configured": true, 00:15:05.219 "data_offset": 2048, 00:15:05.219 "data_size": 63488 00:15:05.219 }, 00:15:05.219 { 00:15:05.219 "name": "BaseBdev4", 00:15:05.219 "uuid": "384a53f0-b79d-4655-baf6-989832074cf9", 00:15:05.219 "is_configured": true, 00:15:05.219 "data_offset": 2048, 00:15:05.219 "data_size": 63488 00:15:05.219 } 00:15:05.219 ] 00:15:05.219 }' 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.219 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:05.478 
20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.478 20:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.478 [2024-11-26 20:27:59.003501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.478 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.738 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:05.738 "name": "Existed_Raid", 00:15:05.738 "aliases": [ 00:15:05.738 "f8899d1c-cd05-46a2-b5d0-214e5663e483" 00:15:05.738 ], 00:15:05.738 "product_name": "Raid Volume", 00:15:05.738 "block_size": 512, 00:15:05.738 "num_blocks": 63488, 00:15:05.738 "uuid": "f8899d1c-cd05-46a2-b5d0-214e5663e483", 00:15:05.738 "assigned_rate_limits": { 00:15:05.738 "rw_ios_per_sec": 0, 00:15:05.738 "rw_mbytes_per_sec": 0, 00:15:05.738 "r_mbytes_per_sec": 0, 00:15:05.738 "w_mbytes_per_sec": 0 00:15:05.738 }, 00:15:05.738 "claimed": false, 00:15:05.738 "zoned": false, 00:15:05.738 "supported_io_types": { 00:15:05.738 "read": true, 00:15:05.738 "write": true, 00:15:05.739 "unmap": false, 00:15:05.739 "flush": false, 00:15:05.739 "reset": true, 00:15:05.739 "nvme_admin": false, 00:15:05.739 "nvme_io": false, 00:15:05.739 "nvme_io_md": false, 00:15:05.739 "write_zeroes": true, 00:15:05.739 "zcopy": false, 00:15:05.739 "get_zone_info": false, 00:15:05.739 "zone_management": false, 00:15:05.739 "zone_append": false, 00:15:05.739 "compare": false, 00:15:05.739 "compare_and_write": false, 00:15:05.739 "abort": false, 00:15:05.739 "seek_hole": false, 00:15:05.739 "seek_data": false, 00:15:05.739 "copy": false, 00:15:05.739 
"nvme_iov_md": false 00:15:05.739 }, 00:15:05.739 "memory_domains": [ 00:15:05.739 { 00:15:05.739 "dma_device_id": "system", 00:15:05.739 "dma_device_type": 1 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.739 "dma_device_type": 2 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "dma_device_id": "system", 00:15:05.739 "dma_device_type": 1 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.739 "dma_device_type": 2 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "dma_device_id": "system", 00:15:05.739 "dma_device_type": 1 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.739 "dma_device_type": 2 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "dma_device_id": "system", 00:15:05.739 "dma_device_type": 1 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.739 "dma_device_type": 2 00:15:05.739 } 00:15:05.739 ], 00:15:05.739 "driver_specific": { 00:15:05.739 "raid": { 00:15:05.739 "uuid": "f8899d1c-cd05-46a2-b5d0-214e5663e483", 00:15:05.739 "strip_size_kb": 0, 00:15:05.739 "state": "online", 00:15:05.739 "raid_level": "raid1", 00:15:05.739 "superblock": true, 00:15:05.739 "num_base_bdevs": 4, 00:15:05.739 "num_base_bdevs_discovered": 4, 00:15:05.739 "num_base_bdevs_operational": 4, 00:15:05.739 "base_bdevs_list": [ 00:15:05.739 { 00:15:05.739 "name": "BaseBdev1", 00:15:05.739 "uuid": "43876485-e4d6-411a-9221-7ad98a656635", 00:15:05.739 "is_configured": true, 00:15:05.739 "data_offset": 2048, 00:15:05.739 "data_size": 63488 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "name": "BaseBdev2", 00:15:05.739 "uuid": "e80f8fee-c902-4e1c-b59d-0b059f3ae5c2", 00:15:05.739 "is_configured": true, 00:15:05.739 "data_offset": 2048, 00:15:05.739 "data_size": 63488 00:15:05.739 }, 00:15:05.739 { 00:15:05.739 "name": "BaseBdev3", 00:15:05.739 "uuid": "90b0f672-2de7-404e-a379-6dd6e2d902a0", 00:15:05.739 "is_configured": true, 
00:15:05.739 "data_offset": 2048, 00:15:05.739 "data_size": 63488 00:15:05.739 }, 00:15:05.739 { 00:15:05.740 "name": "BaseBdev4", 00:15:05.740 "uuid": "384a53f0-b79d-4655-baf6-989832074cf9", 00:15:05.740 "is_configured": true, 00:15:05.740 "data_offset": 2048, 00:15:05.740 "data_size": 63488 00:15:05.740 } 00:15:05.740 ] 00:15:05.740 } 00:15:05.740 } 00:15:05.740 }' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:05.740 BaseBdev2 00:15:05.740 BaseBdev3 00:15:05.740 BaseBdev4' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.740 20:27:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.740 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.000 [2024-11-26 20:27:59.350520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:06.000 20:27:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.000 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.000 "name": "Existed_Raid", 00:15:06.000 "uuid": "f8899d1c-cd05-46a2-b5d0-214e5663e483", 00:15:06.000 "strip_size_kb": 0, 00:15:06.000 
"state": "online", 00:15:06.000 "raid_level": "raid1", 00:15:06.001 "superblock": true, 00:15:06.001 "num_base_bdevs": 4, 00:15:06.001 "num_base_bdevs_discovered": 3, 00:15:06.001 "num_base_bdevs_operational": 3, 00:15:06.001 "base_bdevs_list": [ 00:15:06.001 { 00:15:06.001 "name": null, 00:15:06.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.001 "is_configured": false, 00:15:06.001 "data_offset": 0, 00:15:06.001 "data_size": 63488 00:15:06.001 }, 00:15:06.001 { 00:15:06.001 "name": "BaseBdev2", 00:15:06.001 "uuid": "e80f8fee-c902-4e1c-b59d-0b059f3ae5c2", 00:15:06.001 "is_configured": true, 00:15:06.001 "data_offset": 2048, 00:15:06.001 "data_size": 63488 00:15:06.001 }, 00:15:06.001 { 00:15:06.001 "name": "BaseBdev3", 00:15:06.001 "uuid": "90b0f672-2de7-404e-a379-6dd6e2d902a0", 00:15:06.001 "is_configured": true, 00:15:06.001 "data_offset": 2048, 00:15:06.001 "data_size": 63488 00:15:06.001 }, 00:15:06.001 { 00:15:06.001 "name": "BaseBdev4", 00:15:06.001 "uuid": "384a53f0-b79d-4655-baf6-989832074cf9", 00:15:06.001 "is_configured": true, 00:15:06.001 "data_offset": 2048, 00:15:06.001 "data_size": 63488 00:15:06.001 } 00:15:06.001 ] 00:15:06.001 }' 00:15:06.001 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.001 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.569 20:27:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.569 20:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.569 [2024-11-26 20:27:59.995062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.569 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.828 [2024-11-26 20:28:00.150348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.828 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.828 [2024-11-26 20:28:00.306528] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:06.828 [2024-11-26 20:28:00.306704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.087 [2024-11-26 20:28:00.412643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.087 [2024-11-26 20:28:00.412815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.087 [2024-11-26 20:28:00.412872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.087 BaseBdev2 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.087 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:07.088 [ 00:15:07.088 { 00:15:07.088 "name": "BaseBdev2", 00:15:07.088 "aliases": [ 00:15:07.088 "77e628ba-b3da-456c-876f-bb1db0b3657b" 00:15:07.088 ], 00:15:07.088 "product_name": "Malloc disk", 00:15:07.088 "block_size": 512, 00:15:07.088 "num_blocks": 65536, 00:15:07.088 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:07.088 "assigned_rate_limits": { 00:15:07.088 "rw_ios_per_sec": 0, 00:15:07.088 "rw_mbytes_per_sec": 0, 00:15:07.088 "r_mbytes_per_sec": 0, 00:15:07.088 "w_mbytes_per_sec": 0 00:15:07.088 }, 00:15:07.088 "claimed": false, 00:15:07.088 "zoned": false, 00:15:07.088 "supported_io_types": { 00:15:07.088 "read": true, 00:15:07.088 "write": true, 00:15:07.088 "unmap": true, 00:15:07.088 "flush": true, 00:15:07.088 "reset": true, 00:15:07.088 "nvme_admin": false, 00:15:07.088 "nvme_io": false, 00:15:07.088 "nvme_io_md": false, 00:15:07.088 "write_zeroes": true, 00:15:07.088 "zcopy": true, 00:15:07.088 "get_zone_info": false, 00:15:07.088 "zone_management": false, 00:15:07.088 "zone_append": false, 00:15:07.088 "compare": false, 00:15:07.088 "compare_and_write": false, 00:15:07.088 "abort": true, 00:15:07.088 "seek_hole": false, 00:15:07.088 "seek_data": false, 00:15:07.088 "copy": true, 00:15:07.088 "nvme_iov_md": false 00:15:07.088 }, 00:15:07.088 "memory_domains": [ 00:15:07.088 { 00:15:07.088 "dma_device_id": "system", 00:15:07.088 "dma_device_type": 1 00:15:07.088 }, 00:15:07.088 { 00:15:07.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.088 "dma_device_type": 2 00:15:07.088 } 00:15:07.088 ], 00:15:07.088 "driver_specific": {} 00:15:07.088 } 00:15:07.088 ] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:07.088 20:28:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.088 BaseBdev3 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.088 20:28:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.088 [ 00:15:07.088 { 00:15:07.088 "name": "BaseBdev3", 00:15:07.088 "aliases": [ 00:15:07.088 "9fba8929-2dfa-4445-9d3d-85ac30813bd2" 00:15:07.088 ], 00:15:07.088 "product_name": "Malloc disk", 00:15:07.088 "block_size": 512, 00:15:07.088 "num_blocks": 65536, 00:15:07.088 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:07.088 "assigned_rate_limits": { 00:15:07.088 "rw_ios_per_sec": 0, 00:15:07.088 "rw_mbytes_per_sec": 0, 00:15:07.088 "r_mbytes_per_sec": 0, 00:15:07.088 "w_mbytes_per_sec": 0 00:15:07.088 }, 00:15:07.088 "claimed": false, 00:15:07.088 "zoned": false, 00:15:07.088 "supported_io_types": { 00:15:07.088 "read": true, 00:15:07.088 "write": true, 00:15:07.088 "unmap": true, 00:15:07.088 "flush": true, 00:15:07.088 "reset": true, 00:15:07.088 "nvme_admin": false, 00:15:07.088 "nvme_io": false, 00:15:07.088 "nvme_io_md": false, 00:15:07.088 "write_zeroes": true, 00:15:07.088 "zcopy": true, 00:15:07.088 "get_zone_info": false, 00:15:07.088 "zone_management": false, 00:15:07.088 "zone_append": false, 00:15:07.088 "compare": false, 00:15:07.088 "compare_and_write": false, 00:15:07.088 "abort": true, 00:15:07.088 "seek_hole": false, 00:15:07.088 "seek_data": false, 00:15:07.088 "copy": true, 00:15:07.088 "nvme_iov_md": false 00:15:07.088 }, 00:15:07.088 "memory_domains": [ 00:15:07.088 { 00:15:07.088 "dma_device_id": "system", 00:15:07.088 "dma_device_type": 1 00:15:07.088 }, 00:15:07.088 { 00:15:07.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.088 "dma_device_type": 2 00:15:07.088 } 00:15:07.088 ], 00:15:07.088 "driver_specific": {} 00:15:07.088 } 00:15:07.088 ] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.088 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.348 BaseBdev4 00:15:07.348 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.349 [ 00:15:07.349 { 00:15:07.349 "name": "BaseBdev4", 00:15:07.349 "aliases": [ 00:15:07.349 "36049040-6316-4c15-a367-fab8703ae9f3" 00:15:07.349 ], 00:15:07.349 "product_name": "Malloc disk", 00:15:07.349 "block_size": 512, 00:15:07.349 "num_blocks": 65536, 00:15:07.349 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:07.349 "assigned_rate_limits": { 00:15:07.349 "rw_ios_per_sec": 0, 00:15:07.349 "rw_mbytes_per_sec": 0, 00:15:07.349 "r_mbytes_per_sec": 0, 00:15:07.349 "w_mbytes_per_sec": 0 00:15:07.349 }, 00:15:07.349 "claimed": false, 00:15:07.349 "zoned": false, 00:15:07.349 "supported_io_types": { 00:15:07.349 "read": true, 00:15:07.349 "write": true, 00:15:07.349 "unmap": true, 00:15:07.349 "flush": true, 00:15:07.349 "reset": true, 00:15:07.349 "nvme_admin": false, 00:15:07.349 "nvme_io": false, 00:15:07.349 "nvme_io_md": false, 00:15:07.349 "write_zeroes": true, 00:15:07.349 "zcopy": true, 00:15:07.349 "get_zone_info": false, 00:15:07.349 "zone_management": false, 00:15:07.349 "zone_append": false, 00:15:07.349 "compare": false, 00:15:07.349 "compare_and_write": false, 00:15:07.349 "abort": true, 00:15:07.349 "seek_hole": false, 00:15:07.349 "seek_data": false, 00:15:07.349 "copy": true, 00:15:07.349 "nvme_iov_md": false 00:15:07.349 }, 00:15:07.349 "memory_domains": [ 00:15:07.349 { 00:15:07.349 "dma_device_id": "system", 00:15:07.349 "dma_device_type": 1 00:15:07.349 }, 00:15:07.349 { 00:15:07.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.349 "dma_device_type": 2 00:15:07.349 } 00:15:07.349 ], 00:15:07.349 "driver_specific": {} 00:15:07.349 } 00:15:07.349 ] 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.349 [2024-11-26 20:28:00.724906] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.349 [2024-11-26 20:28:00.725023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.349 [2024-11-26 20:28:00.725084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.349 [2024-11-26 20:28:00.727213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.349 [2024-11-26 20:28:00.727327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.349 "name": "Existed_Raid", 00:15:07.349 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:07.349 "strip_size_kb": 0, 00:15:07.349 "state": "configuring", 00:15:07.349 "raid_level": "raid1", 00:15:07.349 "superblock": true, 00:15:07.349 "num_base_bdevs": 4, 00:15:07.349 "num_base_bdevs_discovered": 3, 00:15:07.349 "num_base_bdevs_operational": 4, 00:15:07.349 "base_bdevs_list": [ 00:15:07.349 { 00:15:07.349 "name": "BaseBdev1", 00:15:07.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.349 "is_configured": false, 00:15:07.349 "data_offset": 0, 00:15:07.349 "data_size": 0 00:15:07.349 }, 00:15:07.349 { 00:15:07.349 "name": "BaseBdev2", 00:15:07.349 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 
00:15:07.349 "is_configured": true, 00:15:07.349 "data_offset": 2048, 00:15:07.349 "data_size": 63488 00:15:07.349 }, 00:15:07.349 { 00:15:07.349 "name": "BaseBdev3", 00:15:07.349 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:07.349 "is_configured": true, 00:15:07.349 "data_offset": 2048, 00:15:07.349 "data_size": 63488 00:15:07.349 }, 00:15:07.349 { 00:15:07.349 "name": "BaseBdev4", 00:15:07.349 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:07.349 "is_configured": true, 00:15:07.349 "data_offset": 2048, 00:15:07.349 "data_size": 63488 00:15:07.349 } 00:15:07.349 ] 00:15:07.349 }' 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.349 20:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.916 [2024-11-26 20:28:01.184168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.916 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.917 "name": "Existed_Raid", 00:15:07.917 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:07.917 "strip_size_kb": 0, 00:15:07.917 "state": "configuring", 00:15:07.917 "raid_level": "raid1", 00:15:07.917 "superblock": true, 00:15:07.917 "num_base_bdevs": 4, 00:15:07.917 "num_base_bdevs_discovered": 2, 00:15:07.917 "num_base_bdevs_operational": 4, 00:15:07.917 "base_bdevs_list": [ 00:15:07.917 { 00:15:07.917 "name": "BaseBdev1", 00:15:07.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.917 "is_configured": false, 00:15:07.917 "data_offset": 0, 00:15:07.917 "data_size": 0 00:15:07.917 }, 00:15:07.917 { 00:15:07.917 "name": null, 00:15:07.917 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:07.917 
"is_configured": false, 00:15:07.917 "data_offset": 0, 00:15:07.917 "data_size": 63488 00:15:07.917 }, 00:15:07.917 { 00:15:07.917 "name": "BaseBdev3", 00:15:07.917 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:07.917 "is_configured": true, 00:15:07.917 "data_offset": 2048, 00:15:07.917 "data_size": 63488 00:15:07.917 }, 00:15:07.917 { 00:15:07.917 "name": "BaseBdev4", 00:15:07.917 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:07.917 "is_configured": true, 00:15:07.917 "data_offset": 2048, 00:15:07.917 "data_size": 63488 00:15:07.917 } 00:15:07.917 ] 00:15:07.917 }' 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.917 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.205 [2024-11-26 20:28:01.706634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.205 BaseBdev1 
00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.205 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.464 [ 00:15:08.464 { 00:15:08.464 "name": "BaseBdev1", 00:15:08.464 "aliases": [ 00:15:08.464 "bf298752-26c7-4574-a90f-0180f3948002" 00:15:08.464 ], 00:15:08.464 "product_name": "Malloc disk", 00:15:08.464 "block_size": 512, 00:15:08.464 "num_blocks": 65536, 00:15:08.464 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:08.464 "assigned_rate_limits": { 00:15:08.464 
"rw_ios_per_sec": 0, 00:15:08.464 "rw_mbytes_per_sec": 0, 00:15:08.464 "r_mbytes_per_sec": 0, 00:15:08.464 "w_mbytes_per_sec": 0 00:15:08.464 }, 00:15:08.464 "claimed": true, 00:15:08.464 "claim_type": "exclusive_write", 00:15:08.464 "zoned": false, 00:15:08.464 "supported_io_types": { 00:15:08.464 "read": true, 00:15:08.464 "write": true, 00:15:08.464 "unmap": true, 00:15:08.464 "flush": true, 00:15:08.464 "reset": true, 00:15:08.464 "nvme_admin": false, 00:15:08.464 "nvme_io": false, 00:15:08.464 "nvme_io_md": false, 00:15:08.464 "write_zeroes": true, 00:15:08.464 "zcopy": true, 00:15:08.464 "get_zone_info": false, 00:15:08.464 "zone_management": false, 00:15:08.464 "zone_append": false, 00:15:08.464 "compare": false, 00:15:08.464 "compare_and_write": false, 00:15:08.464 "abort": true, 00:15:08.464 "seek_hole": false, 00:15:08.464 "seek_data": false, 00:15:08.464 "copy": true, 00:15:08.464 "nvme_iov_md": false 00:15:08.464 }, 00:15:08.464 "memory_domains": [ 00:15:08.464 { 00:15:08.464 "dma_device_id": "system", 00:15:08.464 "dma_device_type": 1 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.464 "dma_device_type": 2 00:15:08.464 } 00:15:08.464 ], 00:15:08.464 "driver_specific": {} 00:15:08.464 } 00:15:08.464 ] 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.464 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.464 "name": "Existed_Raid", 00:15:08.464 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:08.464 "strip_size_kb": 0, 00:15:08.464 "state": "configuring", 00:15:08.464 "raid_level": "raid1", 00:15:08.464 "superblock": true, 00:15:08.465 "num_base_bdevs": 4, 00:15:08.465 "num_base_bdevs_discovered": 3, 00:15:08.465 "num_base_bdevs_operational": 4, 00:15:08.465 "base_bdevs_list": [ 00:15:08.465 { 00:15:08.465 "name": "BaseBdev1", 00:15:08.465 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:08.465 "is_configured": true, 00:15:08.465 "data_offset": 2048, 00:15:08.465 "data_size": 63488 
00:15:08.465 }, 00:15:08.465 { 00:15:08.465 "name": null, 00:15:08.465 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:08.465 "is_configured": false, 00:15:08.465 "data_offset": 0, 00:15:08.465 "data_size": 63488 00:15:08.465 }, 00:15:08.465 { 00:15:08.465 "name": "BaseBdev3", 00:15:08.465 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:08.465 "is_configured": true, 00:15:08.465 "data_offset": 2048, 00:15:08.465 "data_size": 63488 00:15:08.465 }, 00:15:08.465 { 00:15:08.465 "name": "BaseBdev4", 00:15:08.465 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:08.465 "is_configured": true, 00:15:08.465 "data_offset": 2048, 00:15:08.465 "data_size": 63488 00:15:08.465 } 00:15:08.465 ] 00:15:08.465 }' 00:15:08.465 20:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.465 20:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.724 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:08.724 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.982 
[2024-11-26 20:28:02.301718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.982 20:28:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.982 "name": "Existed_Raid", 00:15:08.982 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:08.982 "strip_size_kb": 0, 00:15:08.982 "state": "configuring", 00:15:08.982 "raid_level": "raid1", 00:15:08.982 "superblock": true, 00:15:08.982 "num_base_bdevs": 4, 00:15:08.982 "num_base_bdevs_discovered": 2, 00:15:08.982 "num_base_bdevs_operational": 4, 00:15:08.982 "base_bdevs_list": [ 00:15:08.982 { 00:15:08.982 "name": "BaseBdev1", 00:15:08.982 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:08.982 "is_configured": true, 00:15:08.982 "data_offset": 2048, 00:15:08.982 "data_size": 63488 00:15:08.982 }, 00:15:08.982 { 00:15:08.982 "name": null, 00:15:08.982 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:08.982 "is_configured": false, 00:15:08.982 "data_offset": 0, 00:15:08.982 "data_size": 63488 00:15:08.982 }, 00:15:08.982 { 00:15:08.982 "name": null, 00:15:08.982 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:08.982 "is_configured": false, 00:15:08.982 "data_offset": 0, 00:15:08.982 "data_size": 63488 00:15:08.982 }, 00:15:08.982 { 00:15:08.982 "name": "BaseBdev4", 00:15:08.982 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:08.982 "is_configured": true, 00:15:08.982 "data_offset": 2048, 00:15:08.982 "data_size": 63488 00:15:08.982 } 00:15:08.982 ] 00:15:08.982 }' 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.982 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:09.551 
20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.551 [2024-11-26 20:28:02.876753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.551 "name": "Existed_Raid", 00:15:09.551 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:09.551 "strip_size_kb": 0, 00:15:09.551 "state": "configuring", 00:15:09.551 "raid_level": "raid1", 00:15:09.551 "superblock": true, 00:15:09.551 "num_base_bdevs": 4, 00:15:09.551 "num_base_bdevs_discovered": 3, 00:15:09.551 "num_base_bdevs_operational": 4, 00:15:09.551 "base_bdevs_list": [ 00:15:09.551 { 00:15:09.551 "name": "BaseBdev1", 00:15:09.551 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:09.551 "is_configured": true, 00:15:09.551 "data_offset": 2048, 00:15:09.551 "data_size": 63488 00:15:09.551 }, 00:15:09.551 { 00:15:09.551 "name": null, 00:15:09.551 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:09.551 "is_configured": false, 00:15:09.551 "data_offset": 0, 00:15:09.551 "data_size": 63488 00:15:09.551 }, 00:15:09.551 { 00:15:09.551 "name": "BaseBdev3", 00:15:09.551 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:09.551 "is_configured": true, 00:15:09.551 "data_offset": 2048, 00:15:09.551 "data_size": 63488 00:15:09.551 }, 00:15:09.551 { 00:15:09.551 "name": "BaseBdev4", 00:15:09.551 "uuid": 
"36049040-6316-4c15-a367-fab8703ae9f3", 00:15:09.551 "is_configured": true, 00:15:09.551 "data_offset": 2048, 00:15:09.551 "data_size": 63488 00:15:09.551 } 00:15:09.551 ] 00:15:09.551 }' 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.551 20:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.119 [2024-11-26 20:28:03.435841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.119 "name": "Existed_Raid", 00:15:10.119 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:10.119 "strip_size_kb": 0, 00:15:10.119 "state": "configuring", 00:15:10.119 "raid_level": "raid1", 00:15:10.119 "superblock": true, 00:15:10.119 "num_base_bdevs": 4, 00:15:10.119 "num_base_bdevs_discovered": 2, 00:15:10.119 "num_base_bdevs_operational": 4, 00:15:10.119 "base_bdevs_list": [ 00:15:10.119 { 00:15:10.119 "name": null, 00:15:10.119 
"uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:10.119 "is_configured": false, 00:15:10.119 "data_offset": 0, 00:15:10.119 "data_size": 63488 00:15:10.119 }, 00:15:10.119 { 00:15:10.119 "name": null, 00:15:10.119 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:10.119 "is_configured": false, 00:15:10.119 "data_offset": 0, 00:15:10.119 "data_size": 63488 00:15:10.119 }, 00:15:10.119 { 00:15:10.119 "name": "BaseBdev3", 00:15:10.119 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:10.119 "is_configured": true, 00:15:10.119 "data_offset": 2048, 00:15:10.119 "data_size": 63488 00:15:10.119 }, 00:15:10.119 { 00:15:10.119 "name": "BaseBdev4", 00:15:10.119 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:10.119 "is_configured": true, 00:15:10.119 "data_offset": 2048, 00:15:10.119 "data_size": 63488 00:15:10.119 } 00:15:10.119 ] 00:15:10.119 }' 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.119 20:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.737 [2024-11-26 20:28:04.047899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.737 20:28:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.737 "name": "Existed_Raid", 00:15:10.737 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:10.737 "strip_size_kb": 0, 00:15:10.737 "state": "configuring", 00:15:10.737 "raid_level": "raid1", 00:15:10.737 "superblock": true, 00:15:10.737 "num_base_bdevs": 4, 00:15:10.737 "num_base_bdevs_discovered": 3, 00:15:10.737 "num_base_bdevs_operational": 4, 00:15:10.737 "base_bdevs_list": [ 00:15:10.737 { 00:15:10.737 "name": null, 00:15:10.737 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:10.737 "is_configured": false, 00:15:10.737 "data_offset": 0, 00:15:10.737 "data_size": 63488 00:15:10.737 }, 00:15:10.737 { 00:15:10.737 "name": "BaseBdev2", 00:15:10.737 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:10.737 "is_configured": true, 00:15:10.737 "data_offset": 2048, 00:15:10.737 "data_size": 63488 00:15:10.737 }, 00:15:10.737 { 00:15:10.737 "name": "BaseBdev3", 00:15:10.737 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:10.737 "is_configured": true, 00:15:10.737 "data_offset": 2048, 00:15:10.737 "data_size": 63488 00:15:10.737 }, 00:15:10.737 { 00:15:10.737 "name": "BaseBdev4", 00:15:10.737 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:10.737 "is_configured": true, 00:15:10.737 "data_offset": 2048, 00:15:10.737 "data_size": 63488 00:15:10.737 } 00:15:10.737 ] 00:15:10.737 }' 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.737 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.996 20:28:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.996 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf298752-26c7-4574-a90f-0180f3948002 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 [2024-11-26 20:28:04.614411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:11.254 [2024-11-26 20:28:04.614683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:11.254 [2024-11-26 20:28:04.614702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:11.254 [2024-11-26 20:28:04.614988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:11.254 [2024-11-26 20:28:04.615158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:11.254 [2024-11-26 20:28:04.615168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:11.254 [2024-11-26 20:28:04.615330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.254 NewBaseBdev 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:11.254 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 20:28:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 [ 00:15:11.254 { 00:15:11.254 "name": "NewBaseBdev", 00:15:11.254 "aliases": [ 00:15:11.254 "bf298752-26c7-4574-a90f-0180f3948002" 00:15:11.254 ], 00:15:11.254 "product_name": "Malloc disk", 00:15:11.254 "block_size": 512, 00:15:11.254 "num_blocks": 65536, 00:15:11.254 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:11.254 "assigned_rate_limits": { 00:15:11.254 "rw_ios_per_sec": 0, 00:15:11.254 "rw_mbytes_per_sec": 0, 00:15:11.254 "r_mbytes_per_sec": 0, 00:15:11.254 "w_mbytes_per_sec": 0 00:15:11.254 }, 00:15:11.254 "claimed": true, 00:15:11.254 "claim_type": "exclusive_write", 00:15:11.254 "zoned": false, 00:15:11.254 "supported_io_types": { 00:15:11.254 "read": true, 00:15:11.254 "write": true, 00:15:11.255 "unmap": true, 00:15:11.255 "flush": true, 00:15:11.255 "reset": true, 00:15:11.255 "nvme_admin": false, 00:15:11.255 "nvme_io": false, 00:15:11.255 "nvme_io_md": false, 00:15:11.255 "write_zeroes": true, 00:15:11.255 "zcopy": true, 00:15:11.255 "get_zone_info": false, 00:15:11.255 "zone_management": false, 00:15:11.255 "zone_append": false, 00:15:11.255 "compare": false, 00:15:11.255 "compare_and_write": false, 00:15:11.255 "abort": true, 00:15:11.255 "seek_hole": false, 00:15:11.255 "seek_data": false, 00:15:11.255 "copy": true, 00:15:11.255 "nvme_iov_md": false 00:15:11.255 }, 00:15:11.255 "memory_domains": [ 00:15:11.255 { 00:15:11.255 "dma_device_id": "system", 00:15:11.255 "dma_device_type": 1 00:15:11.255 }, 00:15:11.255 { 00:15:11.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.255 "dma_device_type": 2 00:15:11.255 } 00:15:11.255 ], 00:15:11.255 "driver_specific": {} 00:15:11.255 } 00:15:11.255 ] 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:11.255 20:28:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.255 "name": "Existed_Raid", 00:15:11.255 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:11.255 "strip_size_kb": 0, 00:15:11.255 
"state": "online", 00:15:11.255 "raid_level": "raid1", 00:15:11.255 "superblock": true, 00:15:11.255 "num_base_bdevs": 4, 00:15:11.255 "num_base_bdevs_discovered": 4, 00:15:11.255 "num_base_bdevs_operational": 4, 00:15:11.255 "base_bdevs_list": [ 00:15:11.255 { 00:15:11.255 "name": "NewBaseBdev", 00:15:11.255 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:11.255 "is_configured": true, 00:15:11.255 "data_offset": 2048, 00:15:11.255 "data_size": 63488 00:15:11.255 }, 00:15:11.255 { 00:15:11.255 "name": "BaseBdev2", 00:15:11.255 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:11.255 "is_configured": true, 00:15:11.255 "data_offset": 2048, 00:15:11.255 "data_size": 63488 00:15:11.255 }, 00:15:11.255 { 00:15:11.255 "name": "BaseBdev3", 00:15:11.255 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:11.255 "is_configured": true, 00:15:11.255 "data_offset": 2048, 00:15:11.255 "data_size": 63488 00:15:11.255 }, 00:15:11.255 { 00:15:11.255 "name": "BaseBdev4", 00:15:11.255 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:11.255 "is_configured": true, 00:15:11.255 "data_offset": 2048, 00:15:11.255 "data_size": 63488 00:15:11.255 } 00:15:11.255 ] 00:15:11.255 }' 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.255 20:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.822 
20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.822 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.822 [2024-11-26 20:28:05.126038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.823 "name": "Existed_Raid", 00:15:11.823 "aliases": [ 00:15:11.823 "633a0963-9452-4d49-ace5-dbf4861bb4d8" 00:15:11.823 ], 00:15:11.823 "product_name": "Raid Volume", 00:15:11.823 "block_size": 512, 00:15:11.823 "num_blocks": 63488, 00:15:11.823 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:11.823 "assigned_rate_limits": { 00:15:11.823 "rw_ios_per_sec": 0, 00:15:11.823 "rw_mbytes_per_sec": 0, 00:15:11.823 "r_mbytes_per_sec": 0, 00:15:11.823 "w_mbytes_per_sec": 0 00:15:11.823 }, 00:15:11.823 "claimed": false, 00:15:11.823 "zoned": false, 00:15:11.823 "supported_io_types": { 00:15:11.823 "read": true, 00:15:11.823 "write": true, 00:15:11.823 "unmap": false, 00:15:11.823 "flush": false, 00:15:11.823 "reset": true, 00:15:11.823 "nvme_admin": false, 00:15:11.823 "nvme_io": false, 00:15:11.823 "nvme_io_md": false, 00:15:11.823 "write_zeroes": true, 00:15:11.823 "zcopy": false, 00:15:11.823 "get_zone_info": false, 00:15:11.823 "zone_management": false, 00:15:11.823 "zone_append": false, 00:15:11.823 "compare": false, 00:15:11.823 "compare_and_write": false, 00:15:11.823 
"abort": false, 00:15:11.823 "seek_hole": false, 00:15:11.823 "seek_data": false, 00:15:11.823 "copy": false, 00:15:11.823 "nvme_iov_md": false 00:15:11.823 }, 00:15:11.823 "memory_domains": [ 00:15:11.823 { 00:15:11.823 "dma_device_id": "system", 00:15:11.823 "dma_device_type": 1 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.823 "dma_device_type": 2 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "dma_device_id": "system", 00:15:11.823 "dma_device_type": 1 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.823 "dma_device_type": 2 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "dma_device_id": "system", 00:15:11.823 "dma_device_type": 1 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.823 "dma_device_type": 2 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "dma_device_id": "system", 00:15:11.823 "dma_device_type": 1 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.823 "dma_device_type": 2 00:15:11.823 } 00:15:11.823 ], 00:15:11.823 "driver_specific": { 00:15:11.823 "raid": { 00:15:11.823 "uuid": "633a0963-9452-4d49-ace5-dbf4861bb4d8", 00:15:11.823 "strip_size_kb": 0, 00:15:11.823 "state": "online", 00:15:11.823 "raid_level": "raid1", 00:15:11.823 "superblock": true, 00:15:11.823 "num_base_bdevs": 4, 00:15:11.823 "num_base_bdevs_discovered": 4, 00:15:11.823 "num_base_bdevs_operational": 4, 00:15:11.823 "base_bdevs_list": [ 00:15:11.823 { 00:15:11.823 "name": "NewBaseBdev", 00:15:11.823 "uuid": "bf298752-26c7-4574-a90f-0180f3948002", 00:15:11.823 "is_configured": true, 00:15:11.823 "data_offset": 2048, 00:15:11.823 "data_size": 63488 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "name": "BaseBdev2", 00:15:11.823 "uuid": "77e628ba-b3da-456c-876f-bb1db0b3657b", 00:15:11.823 "is_configured": true, 00:15:11.823 "data_offset": 2048, 00:15:11.823 "data_size": 63488 00:15:11.823 }, 00:15:11.823 { 
00:15:11.823 "name": "BaseBdev3", 00:15:11.823 "uuid": "9fba8929-2dfa-4445-9d3d-85ac30813bd2", 00:15:11.823 "is_configured": true, 00:15:11.823 "data_offset": 2048, 00:15:11.823 "data_size": 63488 00:15:11.823 }, 00:15:11.823 { 00:15:11.823 "name": "BaseBdev4", 00:15:11.823 "uuid": "36049040-6316-4c15-a367-fab8703ae9f3", 00:15:11.823 "is_configured": true, 00:15:11.823 "data_offset": 2048, 00:15:11.823 "data_size": 63488 00:15:11.823 } 00:15:11.823 ] 00:15:11.823 } 00:15:11.823 } 00:15:11.823 }' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:11.823 BaseBdev2 00:15:11.823 BaseBdev3 00:15:11.823 BaseBdev4' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.823 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.082 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.082 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:12.082 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.082 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.082 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.082 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.082 20:28:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.083 [2024-11-26 20:28:05.485028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.083 [2024-11-26 20:28:05.485060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.083 [2024-11-26 20:28:05.485165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.083 [2024-11-26 20:28:05.485507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.083 [2024-11-26 20:28:05.485524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74200 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74200 ']' 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74200 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74200 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.083 killing process with pid 74200 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74200' 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74200 00:15:12.083 [2024-11-26 20:28:05.537165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.083 20:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74200 00:15:12.692 [2024-11-26 20:28:05.973664] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.627 20:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:13.627 00:15:13.627 real 0m12.261s 00:15:13.627 user 0m19.475s 00:15:13.627 sys 0m2.186s 00:15:13.627 20:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:13.627 ************************************ 00:15:13.627 END TEST raid_state_function_test_sb 00:15:13.627 ************************************ 00:15:13.627 20:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.886 20:28:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:13.886 20:28:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:13.886 20:28:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.886 20:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.886 ************************************ 00:15:13.886 START TEST raid_superblock_test 00:15:13.886 ************************************ 00:15:13.886 20:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:13.886 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:13.886 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:13.886 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:13.886 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:13.886 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:13.886 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:13.887 20:28:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74876 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74876 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74876 ']' 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.887 20:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.887 [2024-11-26 20:28:07.337603] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:15:13.887 [2024-11-26 20:28:07.337818] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74876 ] 00:15:14.144 [2024-11-26 20:28:07.513961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.144 [2024-11-26 20:28:07.645329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.401 [2024-11-26 20:28:07.858532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.402 [2024-11-26 20:28:07.858692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:14.969 
20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 malloc1 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 [2024-11-26 20:28:08.280317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.969 [2024-11-26 20:28:08.280374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.969 [2024-11-26 20:28:08.280395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:14.969 [2024-11-26 20:28:08.280405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.969 [2024-11-26 20:28:08.282826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.969 [2024-11-26 20:28:08.282953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.969 pt1 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 malloc2 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.969 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 [2024-11-26 20:28:08.337556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:14.969 [2024-11-26 20:28:08.337678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.970 [2024-11-26 20:28:08.337732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:14.970 [2024-11-26 20:28:08.337776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.970 [2024-11-26 20:28:08.340154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.970 [2024-11-26 20:28:08.340262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:14.970 
pt2 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 malloc3 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 [2024-11-26 20:28:08.415207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:14.970 [2024-11-26 20:28:08.415339] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.970 [2024-11-26 20:28:08.415404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:14.970 [2024-11-26 20:28:08.415441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.970 [2024-11-26 20:28:08.417900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.970 [2024-11-26 20:28:08.417998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:14.970 pt3 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 malloc4 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 [2024-11-26 20:28:08.478590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:14.970 [2024-11-26 20:28:08.478650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.970 [2024-11-26 20:28:08.478671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:14.970 [2024-11-26 20:28:08.478681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.970 [2024-11-26 20:28:08.481099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.970 [2024-11-26 20:28:08.481143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:14.970 pt4 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 [2024-11-26 20:28:08.490597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.970 [2024-11-26 20:28:08.492634] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.970 [2024-11-26 20:28:08.492708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:14.970 [2024-11-26 20:28:08.492810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:14.970 [2024-11-26 20:28:08.493050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:14.970 [2024-11-26 20:28:08.493070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.970 [2024-11-26 20:28:08.493406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:14.970 [2024-11-26 20:28:08.493606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:14.970 [2024-11-26 20:28:08.493624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:14.970 [2024-11-26 20:28:08.493799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.970 
20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.229 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.229 "name": "raid_bdev1", 00:15:15.229 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:15.229 "strip_size_kb": 0, 00:15:15.229 "state": "online", 00:15:15.229 "raid_level": "raid1", 00:15:15.229 "superblock": true, 00:15:15.229 "num_base_bdevs": 4, 00:15:15.229 "num_base_bdevs_discovered": 4, 00:15:15.229 "num_base_bdevs_operational": 4, 00:15:15.229 "base_bdevs_list": [ 00:15:15.229 { 00:15:15.229 "name": "pt1", 00:15:15.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.229 "is_configured": true, 00:15:15.229 "data_offset": 2048, 00:15:15.229 "data_size": 63488 00:15:15.229 }, 00:15:15.229 { 00:15:15.229 "name": "pt2", 00:15:15.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.229 "is_configured": true, 00:15:15.229 "data_offset": 2048, 00:15:15.229 "data_size": 63488 00:15:15.229 }, 00:15:15.229 { 00:15:15.229 "name": "pt3", 00:15:15.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.229 "is_configured": true, 00:15:15.229 "data_offset": 2048, 00:15:15.229 "data_size": 63488 
00:15:15.229 }, 00:15:15.229 { 00:15:15.229 "name": "pt4", 00:15:15.229 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:15.229 "is_configured": true, 00:15:15.229 "data_offset": 2048, 00:15:15.229 "data_size": 63488 00:15:15.229 } 00:15:15.229 ] 00:15:15.229 }' 00:15:15.229 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.229 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.490 20:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.490 [2024-11-26 20:28:08.994149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.490 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.490 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:15.490 "name": "raid_bdev1", 00:15:15.490 "aliases": [ 00:15:15.490 "8faa96e2-e9c8-410a-a39b-8e86e6726268" 00:15:15.490 ], 
00:15:15.490 "product_name": "Raid Volume", 00:15:15.490 "block_size": 512, 00:15:15.490 "num_blocks": 63488, 00:15:15.490 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:15.490 "assigned_rate_limits": { 00:15:15.490 "rw_ios_per_sec": 0, 00:15:15.490 "rw_mbytes_per_sec": 0, 00:15:15.490 "r_mbytes_per_sec": 0, 00:15:15.490 "w_mbytes_per_sec": 0 00:15:15.490 }, 00:15:15.490 "claimed": false, 00:15:15.490 "zoned": false, 00:15:15.490 "supported_io_types": { 00:15:15.490 "read": true, 00:15:15.490 "write": true, 00:15:15.490 "unmap": false, 00:15:15.490 "flush": false, 00:15:15.490 "reset": true, 00:15:15.490 "nvme_admin": false, 00:15:15.490 "nvme_io": false, 00:15:15.490 "nvme_io_md": false, 00:15:15.490 "write_zeroes": true, 00:15:15.490 "zcopy": false, 00:15:15.490 "get_zone_info": false, 00:15:15.490 "zone_management": false, 00:15:15.490 "zone_append": false, 00:15:15.490 "compare": false, 00:15:15.490 "compare_and_write": false, 00:15:15.490 "abort": false, 00:15:15.490 "seek_hole": false, 00:15:15.490 "seek_data": false, 00:15:15.490 "copy": false, 00:15:15.490 "nvme_iov_md": false 00:15:15.490 }, 00:15:15.490 "memory_domains": [ 00:15:15.490 { 00:15:15.490 "dma_device_id": "system", 00:15:15.490 "dma_device_type": 1 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.490 "dma_device_type": 2 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "dma_device_id": "system", 00:15:15.490 "dma_device_type": 1 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.490 "dma_device_type": 2 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "dma_device_id": "system", 00:15:15.490 "dma_device_type": 1 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.490 "dma_device_type": 2 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "dma_device_id": "system", 00:15:15.490 "dma_device_type": 1 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:15.490 "dma_device_type": 2 00:15:15.490 } 00:15:15.490 ], 00:15:15.490 "driver_specific": { 00:15:15.490 "raid": { 00:15:15.490 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:15.490 "strip_size_kb": 0, 00:15:15.490 "state": "online", 00:15:15.490 "raid_level": "raid1", 00:15:15.490 "superblock": true, 00:15:15.490 "num_base_bdevs": 4, 00:15:15.490 "num_base_bdevs_discovered": 4, 00:15:15.490 "num_base_bdevs_operational": 4, 00:15:15.490 "base_bdevs_list": [ 00:15:15.490 { 00:15:15.490 "name": "pt1", 00:15:15.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.490 "is_configured": true, 00:15:15.490 "data_offset": 2048, 00:15:15.490 "data_size": 63488 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "name": "pt2", 00:15:15.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.490 "is_configured": true, 00:15:15.490 "data_offset": 2048, 00:15:15.490 "data_size": 63488 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "name": "pt3", 00:15:15.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.490 "is_configured": true, 00:15:15.490 "data_offset": 2048, 00:15:15.490 "data_size": 63488 00:15:15.490 }, 00:15:15.490 { 00:15:15.490 "name": "pt4", 00:15:15.490 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:15.490 "is_configured": true, 00:15:15.490 "data_offset": 2048, 00:15:15.490 "data_size": 63488 00:15:15.490 } 00:15:15.490 ] 00:15:15.490 } 00:15:15.490 } 00:15:15.490 }' 00:15:15.490 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:15.750 pt2 00:15:15.750 pt3 00:15:15.750 pt4' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.750 20:28:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:15.750 [2024-11-26 20:28:09.285601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.750 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8faa96e2-e9c8-410a-a39b-8e86e6726268 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8faa96e2-e9c8-410a-a39b-8e86e6726268 ']' 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.008 [2024-11-26 20:28:09.333218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.008 [2024-11-26 20:28:09.333260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.008 [2024-11-26 20:28:09.333346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.008 [2024-11-26 20:28:09.333448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.008 [2024-11-26 20:28:09.333465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.008 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 [2024-11-26 20:28:09.496996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:16.009 [2024-11-26 20:28:09.499055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:16.009 [2024-11-26 20:28:09.499106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:16.009 [2024-11-26 20:28:09.499143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:16.009 [2024-11-26 20:28:09.499195] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:16.009 [2024-11-26 20:28:09.499260] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:16.009 [2024-11-26 20:28:09.499281] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:16.009 [2024-11-26 20:28:09.499299] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:16.009 [2024-11-26 20:28:09.499313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.009 [2024-11-26 20:28:09.499324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:15:16.009 request: 00:15:16.009 { 00:15:16.009 "name": "raid_bdev1", 00:15:16.009 "raid_level": "raid1", 00:15:16.009 "base_bdevs": [ 00:15:16.009 "malloc1", 00:15:16.009 "malloc2", 00:15:16.009 "malloc3", 00:15:16.009 "malloc4" 00:15:16.009 ], 00:15:16.009 "superblock": false, 00:15:16.009 "method": "bdev_raid_create", 00:15:16.009 "req_id": 1 00:15:16.009 } 00:15:16.009 Got JSON-RPC error response 00:15:16.009 response: 00:15:16.009 { 00:15:16.009 "code": -17, 00:15:16.009 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:16.009 } 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.009 20:28:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.009 [2024-11-26 20:28:09.556861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.009 [2024-11-26 20:28:09.556993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.009 [2024-11-26 20:28:09.557046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:16.009 [2024-11-26 20:28:09.557086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.009 [2024-11-26 20:28:09.559517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.009 [2024-11-26 20:28:09.559605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.009 [2024-11-26 20:28:09.559720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:16.009 [2024-11-26 20:28:09.559833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:16.009 pt1 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:16.009 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.267 20:28:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.267 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.267 "name": "raid_bdev1", 00:15:16.267 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:16.267 "strip_size_kb": 0, 00:15:16.267 "state": "configuring", 00:15:16.267 "raid_level": "raid1", 00:15:16.267 "superblock": true, 00:15:16.267 "num_base_bdevs": 4, 00:15:16.267 "num_base_bdevs_discovered": 1, 00:15:16.267 "num_base_bdevs_operational": 4, 00:15:16.267 "base_bdevs_list": [ 00:15:16.267 { 00:15:16.267 "name": "pt1", 00:15:16.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.267 "is_configured": true, 00:15:16.267 "data_offset": 2048, 00:15:16.267 "data_size": 63488 00:15:16.267 }, 00:15:16.267 { 00:15:16.267 "name": null, 00:15:16.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.267 "is_configured": false, 00:15:16.267 "data_offset": 2048, 00:15:16.267 "data_size": 63488 00:15:16.267 }, 00:15:16.267 { 00:15:16.267 "name": null, 00:15:16.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.267 
"is_configured": false, 00:15:16.267 "data_offset": 2048, 00:15:16.267 "data_size": 63488 00:15:16.267 }, 00:15:16.267 { 00:15:16.267 "name": null, 00:15:16.267 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:16.267 "is_configured": false, 00:15:16.268 "data_offset": 2048, 00:15:16.268 "data_size": 63488 00:15:16.268 } 00:15:16.268 ] 00:15:16.268 }' 00:15:16.268 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.268 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.527 [2024-11-26 20:28:09.988273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.527 [2024-11-26 20:28:09.988408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.527 [2024-11-26 20:28:09.988467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:16.527 [2024-11-26 20:28:09.988525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.527 [2024-11-26 20:28:09.989091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.527 [2024-11-26 20:28:09.989165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.527 [2024-11-26 20:28:09.989306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:16.527 [2024-11-26 20:28:09.989373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:15:16.527 pt2 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.527 20:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.527 [2024-11-26 20:28:09.996252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.527 "name": "raid_bdev1", 00:15:16.527 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:16.527 "strip_size_kb": 0, 00:15:16.527 "state": "configuring", 00:15:16.527 "raid_level": "raid1", 00:15:16.527 "superblock": true, 00:15:16.527 "num_base_bdevs": 4, 00:15:16.527 "num_base_bdevs_discovered": 1, 00:15:16.527 "num_base_bdevs_operational": 4, 00:15:16.527 "base_bdevs_list": [ 00:15:16.527 { 00:15:16.527 "name": "pt1", 00:15:16.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.527 "is_configured": true, 00:15:16.527 "data_offset": 2048, 00:15:16.527 "data_size": 63488 00:15:16.527 }, 00:15:16.527 { 00:15:16.527 "name": null, 00:15:16.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.527 "is_configured": false, 00:15:16.527 "data_offset": 0, 00:15:16.527 "data_size": 63488 00:15:16.527 }, 00:15:16.527 { 00:15:16.527 "name": null, 00:15:16.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.527 "is_configured": false, 00:15:16.527 "data_offset": 2048, 00:15:16.527 "data_size": 63488 00:15:16.527 }, 00:15:16.527 { 00:15:16.527 "name": null, 00:15:16.527 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:16.527 "is_configured": false, 00:15:16.527 "data_offset": 2048, 00:15:16.527 "data_size": 63488 00:15:16.527 } 00:15:16.527 ] 00:15:16.527 }' 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.527 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.096 [2024-11-26 20:28:10.475447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.096 [2024-11-26 20:28:10.475566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.096 [2024-11-26 20:28:10.475593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:17.096 [2024-11-26 20:28:10.475602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.096 [2024-11-26 20:28:10.476121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.096 [2024-11-26 20:28:10.476150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.096 [2024-11-26 20:28:10.476264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:17.096 [2024-11-26 20:28:10.476292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.096 pt2 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:17.096 20:28:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.096 [2024-11-26 20:28:10.487462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:17.096 [2024-11-26 20:28:10.487537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.096 [2024-11-26 20:28:10.487562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:17.096 [2024-11-26 20:28:10.487573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.096 [2024-11-26 20:28:10.488070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.096 [2024-11-26 20:28:10.488125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:17.096 [2024-11-26 20:28:10.488226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:17.096 [2024-11-26 20:28:10.488268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:17.096 pt3 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.096 [2024-11-26 20:28:10.499407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:17.096 [2024-11-26 
20:28:10.499474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.096 [2024-11-26 20:28:10.499495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:17.096 [2024-11-26 20:28:10.499505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.096 [2024-11-26 20:28:10.499994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.096 [2024-11-26 20:28:10.500012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:17.096 [2024-11-26 20:28:10.500098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:17.096 [2024-11-26 20:28:10.500128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:17.096 [2024-11-26 20:28:10.500319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:17.096 [2024-11-26 20:28:10.500330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:17.096 [2024-11-26 20:28:10.500595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:17.096 [2024-11-26 20:28:10.500809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:17.096 [2024-11-26 20:28:10.500840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:17.096 [2024-11-26 20:28:10.501023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.096 pt4 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.096 "name": "raid_bdev1", 00:15:17.096 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:17.096 "strip_size_kb": 0, 00:15:17.096 "state": "online", 00:15:17.096 "raid_level": "raid1", 00:15:17.096 "superblock": true, 00:15:17.096 "num_base_bdevs": 4, 00:15:17.096 
"num_base_bdevs_discovered": 4, 00:15:17.096 "num_base_bdevs_operational": 4, 00:15:17.096 "base_bdevs_list": [ 00:15:17.096 { 00:15:17.096 "name": "pt1", 00:15:17.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.096 "is_configured": true, 00:15:17.096 "data_offset": 2048, 00:15:17.096 "data_size": 63488 00:15:17.096 }, 00:15:17.096 { 00:15:17.096 "name": "pt2", 00:15:17.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.096 "is_configured": true, 00:15:17.096 "data_offset": 2048, 00:15:17.096 "data_size": 63488 00:15:17.096 }, 00:15:17.096 { 00:15:17.096 "name": "pt3", 00:15:17.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.096 "is_configured": true, 00:15:17.096 "data_offset": 2048, 00:15:17.096 "data_size": 63488 00:15:17.096 }, 00:15:17.096 { 00:15:17.096 "name": "pt4", 00:15:17.096 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:17.096 "is_configured": true, 00:15:17.096 "data_offset": 2048, 00:15:17.096 "data_size": 63488 00:15:17.096 } 00:15:17.096 ] 00:15:17.096 }' 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.096 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.666 20:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.666 [2024-11-26 20:28:11.002971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.666 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.666 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.666 "name": "raid_bdev1", 00:15:17.666 "aliases": [ 00:15:17.666 "8faa96e2-e9c8-410a-a39b-8e86e6726268" 00:15:17.666 ], 00:15:17.666 "product_name": "Raid Volume", 00:15:17.666 "block_size": 512, 00:15:17.666 "num_blocks": 63488, 00:15:17.666 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:17.666 "assigned_rate_limits": { 00:15:17.666 "rw_ios_per_sec": 0, 00:15:17.666 "rw_mbytes_per_sec": 0, 00:15:17.666 "r_mbytes_per_sec": 0, 00:15:17.666 "w_mbytes_per_sec": 0 00:15:17.666 }, 00:15:17.666 "claimed": false, 00:15:17.666 "zoned": false, 00:15:17.666 "supported_io_types": { 00:15:17.666 "read": true, 00:15:17.666 "write": true, 00:15:17.666 "unmap": false, 00:15:17.666 "flush": false, 00:15:17.666 "reset": true, 00:15:17.666 "nvme_admin": false, 00:15:17.666 "nvme_io": false, 00:15:17.666 "nvme_io_md": false, 00:15:17.666 "write_zeroes": true, 00:15:17.666 "zcopy": false, 00:15:17.666 "get_zone_info": false, 00:15:17.666 "zone_management": false, 00:15:17.666 "zone_append": false, 00:15:17.666 "compare": false, 00:15:17.666 "compare_and_write": false, 00:15:17.666 "abort": false, 00:15:17.666 "seek_hole": false, 00:15:17.666 "seek_data": false, 00:15:17.666 "copy": false, 00:15:17.666 "nvme_iov_md": false 00:15:17.666 }, 00:15:17.666 "memory_domains": [ 00:15:17.666 { 00:15:17.666 "dma_device_id": "system", 00:15:17.666 
"dma_device_type": 1 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.666 "dma_device_type": 2 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "dma_device_id": "system", 00:15:17.666 "dma_device_type": 1 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.666 "dma_device_type": 2 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "dma_device_id": "system", 00:15:17.666 "dma_device_type": 1 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.666 "dma_device_type": 2 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "dma_device_id": "system", 00:15:17.666 "dma_device_type": 1 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.666 "dma_device_type": 2 00:15:17.666 } 00:15:17.666 ], 00:15:17.666 "driver_specific": { 00:15:17.666 "raid": { 00:15:17.666 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:17.666 "strip_size_kb": 0, 00:15:17.666 "state": "online", 00:15:17.666 "raid_level": "raid1", 00:15:17.666 "superblock": true, 00:15:17.666 "num_base_bdevs": 4, 00:15:17.666 "num_base_bdevs_discovered": 4, 00:15:17.666 "num_base_bdevs_operational": 4, 00:15:17.666 "base_bdevs_list": [ 00:15:17.666 { 00:15:17.666 "name": "pt1", 00:15:17.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.666 "is_configured": true, 00:15:17.666 "data_offset": 2048, 00:15:17.666 "data_size": 63488 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "name": "pt2", 00:15:17.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.666 "is_configured": true, 00:15:17.666 "data_offset": 2048, 00:15:17.666 "data_size": 63488 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "name": "pt3", 00:15:17.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.666 "is_configured": true, 00:15:17.666 "data_offset": 2048, 00:15:17.666 "data_size": 63488 00:15:17.666 }, 00:15:17.666 { 00:15:17.666 "name": "pt4", 00:15:17.666 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:17.666 "is_configured": true, 00:15:17.666 "data_offset": 2048, 00:15:17.666 "data_size": 63488 00:15:17.666 } 00:15:17.666 ] 00:15:17.666 } 00:15:17.666 } 00:15:17.666 }' 00:15:17.666 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.666 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:17.666 pt2 00:15:17.666 pt3 00:15:17.667 pt4' 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.667 20:28:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.667 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:17.927 [2024-11-26 20:28:11.334421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8faa96e2-e9c8-410a-a39b-8e86e6726268 '!=' 8faa96e2-e9c8-410a-a39b-8e86e6726268 ']' 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.927 [2024-11-26 20:28:11.382012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:17.927 20:28:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.927 "name": "raid_bdev1", 00:15:17.927 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:17.927 "strip_size_kb": 0, 00:15:17.927 "state": "online", 
00:15:17.927 "raid_level": "raid1", 00:15:17.927 "superblock": true, 00:15:17.927 "num_base_bdevs": 4, 00:15:17.927 "num_base_bdevs_discovered": 3, 00:15:17.927 "num_base_bdevs_operational": 3, 00:15:17.927 "base_bdevs_list": [ 00:15:17.927 { 00:15:17.927 "name": null, 00:15:17.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.927 "is_configured": false, 00:15:17.927 "data_offset": 0, 00:15:17.927 "data_size": 63488 00:15:17.927 }, 00:15:17.927 { 00:15:17.927 "name": "pt2", 00:15:17.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.927 "is_configured": true, 00:15:17.927 "data_offset": 2048, 00:15:17.927 "data_size": 63488 00:15:17.927 }, 00:15:17.927 { 00:15:17.927 "name": "pt3", 00:15:17.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.927 "is_configured": true, 00:15:17.927 "data_offset": 2048, 00:15:17.927 "data_size": 63488 00:15:17.927 }, 00:15:17.927 { 00:15:17.927 "name": "pt4", 00:15:17.927 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:17.927 "is_configured": true, 00:15:17.927 "data_offset": 2048, 00:15:17.927 "data_size": 63488 00:15:17.927 } 00:15:17.927 ] 00:15:17.927 }' 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.927 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.505 [2024-11-26 20:28:11.841228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.505 [2024-11-26 20:28:11.841373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.505 [2024-11-26 20:28:11.841497] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:18.505 [2024-11-26 20:28:11.841629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.505 [2024-11-26 20:28:11.841684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:18.505 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.506 
20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 [2024-11-26 20:28:11.937014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.506 [2024-11-26 20:28:11.937151] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.506 [2024-11-26 20:28:11.937198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:18.506 [2024-11-26 20:28:11.937230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.506 [2024-11-26 20:28:11.939529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.506 [2024-11-26 20:28:11.939611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.506 [2024-11-26 20:28:11.939729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:18.506 [2024-11-26 20:28:11.939810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.506 pt2 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.506 "name": "raid_bdev1", 00:15:18.506 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:18.506 "strip_size_kb": 0, 00:15:18.506 "state": "configuring", 00:15:18.506 "raid_level": "raid1", 00:15:18.506 "superblock": true, 00:15:18.506 "num_base_bdevs": 4, 00:15:18.506 "num_base_bdevs_discovered": 1, 00:15:18.506 "num_base_bdevs_operational": 3, 00:15:18.506 "base_bdevs_list": [ 00:15:18.506 { 00:15:18.506 "name": null, 00:15:18.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.506 "is_configured": false, 00:15:18.506 "data_offset": 2048, 00:15:18.506 "data_size": 63488 00:15:18.506 }, 00:15:18.506 { 00:15:18.506 "name": "pt2", 00:15:18.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.506 "is_configured": true, 00:15:18.506 "data_offset": 2048, 00:15:18.506 "data_size": 63488 00:15:18.506 }, 00:15:18.506 { 00:15:18.506 "name": null, 00:15:18.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.506 "is_configured": false, 00:15:18.506 "data_offset": 2048, 00:15:18.506 "data_size": 63488 00:15:18.506 }, 00:15:18.506 { 00:15:18.506 "name": null, 00:15:18.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:18.506 "is_configured": false, 00:15:18.506 "data_offset": 2048, 00:15:18.506 "data_size": 63488 00:15:18.506 } 00:15:18.506 ] 00:15:18.506 }' 
00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.506 20:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.104 [2024-11-26 20:28:12.436431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:19.104 [2024-11-26 20:28:12.436505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.104 [2024-11-26 20:28:12.436529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:19.104 [2024-11-26 20:28:12.436540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.104 [2024-11-26 20:28:12.437087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.104 [2024-11-26 20:28:12.437112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:19.104 [2024-11-26 20:28:12.437206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:19.104 [2024-11-26 20:28:12.437230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:19.104 pt3 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.104 "name": "raid_bdev1", 00:15:19.104 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:19.104 "strip_size_kb": 0, 00:15:19.104 "state": "configuring", 00:15:19.104 "raid_level": "raid1", 00:15:19.104 "superblock": true, 00:15:19.104 "num_base_bdevs": 4, 00:15:19.104 "num_base_bdevs_discovered": 2, 00:15:19.104 "num_base_bdevs_operational": 3, 00:15:19.104 
"base_bdevs_list": [ 00:15:19.104 { 00:15:19.104 "name": null, 00:15:19.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.104 "is_configured": false, 00:15:19.104 "data_offset": 2048, 00:15:19.104 "data_size": 63488 00:15:19.104 }, 00:15:19.104 { 00:15:19.104 "name": "pt2", 00:15:19.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.104 "is_configured": true, 00:15:19.104 "data_offset": 2048, 00:15:19.104 "data_size": 63488 00:15:19.104 }, 00:15:19.104 { 00:15:19.104 "name": "pt3", 00:15:19.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.104 "is_configured": true, 00:15:19.104 "data_offset": 2048, 00:15:19.104 "data_size": 63488 00:15:19.104 }, 00:15:19.104 { 00:15:19.104 "name": null, 00:15:19.104 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.104 "is_configured": false, 00:15:19.104 "data_offset": 2048, 00:15:19.104 "data_size": 63488 00:15:19.104 } 00:15:19.104 ] 00:15:19.104 }' 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.104 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.364 [2024-11-26 20:28:12.891688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:19.364 [2024-11-26 20:28:12.891879] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.364 [2024-11-26 20:28:12.891948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:19.364 [2024-11-26 20:28:12.891987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.364 [2024-11-26 20:28:12.892564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.364 [2024-11-26 20:28:12.892632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:19.364 [2024-11-26 20:28:12.892815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:19.364 [2024-11-26 20:28:12.892903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:19.364 [2024-11-26 20:28:12.893111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:19.364 [2024-11-26 20:28:12.893156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:19.364 [2024-11-26 20:28:12.893494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:19.364 [2024-11-26 20:28:12.893719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:19.364 [2024-11-26 20:28:12.893773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:19.364 [2024-11-26 20:28:12.893997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.364 pt4 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.364 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.624 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.624 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.624 "name": "raid_bdev1", 00:15:19.624 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:19.624 "strip_size_kb": 0, 00:15:19.624 "state": "online", 00:15:19.624 "raid_level": "raid1", 00:15:19.624 "superblock": true, 00:15:19.624 "num_base_bdevs": 4, 00:15:19.624 "num_base_bdevs_discovered": 3, 00:15:19.624 "num_base_bdevs_operational": 3, 00:15:19.624 "base_bdevs_list": [ 00:15:19.624 { 00:15:19.624 "name": null, 00:15:19.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.624 "is_configured": false, 00:15:19.624 
"data_offset": 2048, 00:15:19.624 "data_size": 63488 00:15:19.624 }, 00:15:19.624 { 00:15:19.624 "name": "pt2", 00:15:19.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.624 "is_configured": true, 00:15:19.624 "data_offset": 2048, 00:15:19.624 "data_size": 63488 00:15:19.624 }, 00:15:19.624 { 00:15:19.624 "name": "pt3", 00:15:19.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.624 "is_configured": true, 00:15:19.624 "data_offset": 2048, 00:15:19.624 "data_size": 63488 00:15:19.624 }, 00:15:19.624 { 00:15:19.624 "name": "pt4", 00:15:19.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.624 "is_configured": true, 00:15:19.624 "data_offset": 2048, 00:15:19.624 "data_size": 63488 00:15:19.624 } 00:15:19.624 ] 00:15:19.624 }' 00:15:19.624 20:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.624 20:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.883 [2024-11-26 20:28:13.382777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.883 [2024-11-26 20:28:13.382888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.883 [2024-11-26 20:28:13.382993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.883 [2024-11-26 20:28:13.383097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.883 [2024-11-26 20:28:13.383147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:19.883 20:28:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:19.883 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.141 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.142 [2024-11-26 20:28:13.462643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.142 [2024-11-26 20:28:13.462783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:20.142 [2024-11-26 20:28:13.462833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:20.142 [2024-11-26 20:28:13.462873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.142 [2024-11-26 20:28:13.465381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.142 [2024-11-26 20:28:13.465472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.142 [2024-11-26 20:28:13.465598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:20.142 [2024-11-26 20:28:13.465687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.142 [2024-11-26 20:28:13.465895] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:20.142 [2024-11-26 20:28:13.465963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.142 [2024-11-26 20:28:13.466029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:20.142 [2024-11-26 20:28:13.466134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.142 [2024-11-26 20:28:13.466302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:20.142 pt1 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.142 "name": "raid_bdev1", 00:15:20.142 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:20.142 "strip_size_kb": 0, 00:15:20.142 "state": "configuring", 00:15:20.142 "raid_level": "raid1", 00:15:20.142 "superblock": true, 00:15:20.142 "num_base_bdevs": 4, 00:15:20.142 "num_base_bdevs_discovered": 2, 00:15:20.142 "num_base_bdevs_operational": 3, 00:15:20.142 "base_bdevs_list": [ 00:15:20.142 { 00:15:20.142 "name": null, 00:15:20.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.142 "is_configured": false, 00:15:20.142 "data_offset": 2048, 00:15:20.142 
"data_size": 63488 00:15:20.142 }, 00:15:20.142 { 00:15:20.142 "name": "pt2", 00:15:20.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.142 "is_configured": true, 00:15:20.142 "data_offset": 2048, 00:15:20.142 "data_size": 63488 00:15:20.142 }, 00:15:20.142 { 00:15:20.142 "name": "pt3", 00:15:20.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.142 "is_configured": true, 00:15:20.142 "data_offset": 2048, 00:15:20.142 "data_size": 63488 00:15:20.142 }, 00:15:20.142 { 00:15:20.142 "name": null, 00:15:20.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.142 "is_configured": false, 00:15:20.142 "data_offset": 2048, 00:15:20.142 "data_size": 63488 00:15:20.142 } 00:15:20.142 ] 00:15:20.142 }' 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.142 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.401 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:20.401 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.401 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.401 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:20.659 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.659 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:20.659 20:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:20.659 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.660 20:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.660 [2024-11-26 
20:28:14.001805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:20.660 [2024-11-26 20:28:14.001888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.660 [2024-11-26 20:28:14.001913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:20.660 [2024-11-26 20:28:14.001924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.660 [2024-11-26 20:28:14.002446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.660 [2024-11-26 20:28:14.002468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:20.660 [2024-11-26 20:28:14.002566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:20.660 [2024-11-26 20:28:14.002590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:20.660 [2024-11-26 20:28:14.002736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:20.660 [2024-11-26 20:28:14.002746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:20.660 [2024-11-26 20:28:14.003032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:20.660 [2024-11-26 20:28:14.003200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:20.660 [2024-11-26 20:28:14.003224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:20.660 [2024-11-26 20:28:14.003421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.660 pt4 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:20.660 20:28:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.660 "name": "raid_bdev1", 00:15:20.660 "uuid": "8faa96e2-e9c8-410a-a39b-8e86e6726268", 00:15:20.660 "strip_size_kb": 0, 00:15:20.660 "state": "online", 00:15:20.660 "raid_level": "raid1", 00:15:20.660 "superblock": true, 00:15:20.660 "num_base_bdevs": 4, 00:15:20.660 "num_base_bdevs_discovered": 3, 00:15:20.660 "num_base_bdevs_operational": 3, 00:15:20.660 "base_bdevs_list": [ 00:15:20.660 { 
00:15:20.660 "name": null, 00:15:20.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.660 "is_configured": false, 00:15:20.660 "data_offset": 2048, 00:15:20.660 "data_size": 63488 00:15:20.660 }, 00:15:20.660 { 00:15:20.660 "name": "pt2", 00:15:20.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.660 "is_configured": true, 00:15:20.660 "data_offset": 2048, 00:15:20.660 "data_size": 63488 00:15:20.660 }, 00:15:20.660 { 00:15:20.660 "name": "pt3", 00:15:20.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.660 "is_configured": true, 00:15:20.660 "data_offset": 2048, 00:15:20.660 "data_size": 63488 00:15:20.660 }, 00:15:20.660 { 00:15:20.660 "name": "pt4", 00:15:20.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.660 "is_configured": true, 00:15:20.660 "data_offset": 2048, 00:15:20.660 "data_size": 63488 00:15:20.660 } 00:15:20.660 ] 00:15:20.660 }' 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.660 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.919 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:20.919 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.919 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.919 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:20.919 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.178 
20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.178 [2024-11-26 20:28:14.501276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8faa96e2-e9c8-410a-a39b-8e86e6726268 '!=' 8faa96e2-e9c8-410a-a39b-8e86e6726268 ']' 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74876 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74876 ']' 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74876 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74876 00:15:21.178 killing process with pid 74876 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74876' 00:15:21.178 20:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74876 00:15:21.178 [2024-11-26 20:28:14.585459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.178 [2024-11-26 20:28:14.585567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.178 20:28:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74876 00:15:21.178 [2024-11-26 20:28:14.585656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.178 [2024-11-26 20:28:14.585672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:21.745 [2024-11-26 20:28:15.024540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.798 20:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:22.798 00:15:22.798 real 0m8.975s 00:15:22.798 user 0m14.010s 00:15:22.798 sys 0m1.702s 00:15:22.798 20:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.798 20:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.798 ************************************ 00:15:22.798 END TEST raid_superblock_test 00:15:22.798 ************************************ 00:15:22.798 20:28:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:22.798 20:28:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.798 20:28:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.798 20:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.798 ************************************ 00:15:22.798 START TEST raid_read_error_test 00:15:22.798 ************************************ 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:22.798 
20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.798 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:22.799 20:28:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XgpBEkWjlk 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75369 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75369 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75369 ']' 00:15:22.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.799 20:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 [2024-11-26 20:28:16.409826] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:15:23.058 [2024-11-26 20:28:16.410179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75369 ] 00:15:23.058 [2024-11-26 20:28:16.590376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.317 [2024-11-26 20:28:16.713272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.576 [2024-11-26 20:28:16.920158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.576 [2024-11-26 20:28:16.920199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.836 BaseBdev1_malloc 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.836 true 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.836 [2024-11-26 20:28:17.352920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:23.836 [2024-11-26 20:28:17.353020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.836 [2024-11-26 20:28:17.353057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:23.836 [2024-11-26 20:28:17.353077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.836 [2024-11-26 20:28:17.355932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.836 [2024-11-26 20:28:17.355986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:23.836 BaseBdev1 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.836 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.095 BaseBdev2_malloc 00:15:24.095 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.095 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 true 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 [2024-11-26 20:28:17.424521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:24.096 [2024-11-26 20:28:17.424587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.096 [2024-11-26 20:28:17.424604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:24.096 [2024-11-26 20:28:17.424615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.096 [2024-11-26 20:28:17.426961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.096 [2024-11-26 20:28:17.427005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:24.096 BaseBdev2 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 BaseBdev3_malloc 00:15:24.096 20:28:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 true 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 [2024-11-26 20:28:17.507699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:24.096 [2024-11-26 20:28:17.507768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.096 [2024-11-26 20:28:17.507787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:24.096 [2024-11-26 20:28:17.507799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.096 [2024-11-26 20:28:17.510197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.096 [2024-11-26 20:28:17.510346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:24.096 BaseBdev3 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 BaseBdev4_malloc 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 true 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 [2024-11-26 20:28:17.578744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:24.096 [2024-11-26 20:28:17.578810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.096 [2024-11-26 20:28:17.578828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:24.096 [2024-11-26 20:28:17.578839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.096 [2024-11-26 20:28:17.581042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.096 [2024-11-26 20:28:17.581097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:24.096 BaseBdev4 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 [2024-11-26 20:28:17.590776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.096 [2024-11-26 20:28:17.592691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.096 [2024-11-26 20:28:17.592793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.096 [2024-11-26 20:28:17.592883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:24.096 [2024-11-26 20:28:17.593157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:24.096 [2024-11-26 20:28:17.593174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:24.096 [2024-11-26 20:28:17.593467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:24.096 [2024-11-26 20:28:17.593650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:24.096 [2024-11-26 20:28:17.593661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:24.096 [2024-11-26 20:28:17.593863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:24.096 20:28:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.355 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.355 "name": "raid_bdev1", 00:15:24.355 "uuid": "f6109d27-97b6-47f5-8299-98b29dd9518e", 00:15:24.355 "strip_size_kb": 0, 00:15:24.355 "state": "online", 00:15:24.355 "raid_level": "raid1", 00:15:24.355 "superblock": true, 00:15:24.355 "num_base_bdevs": 4, 00:15:24.355 "num_base_bdevs_discovered": 4, 00:15:24.355 "num_base_bdevs_operational": 4, 00:15:24.355 "base_bdevs_list": [ 00:15:24.356 { 
00:15:24.356 "name": "BaseBdev1", 00:15:24.356 "uuid": "b8ec2bc9-f834-559f-97c6-f0899f9622dd", 00:15:24.356 "is_configured": true, 00:15:24.356 "data_offset": 2048, 00:15:24.356 "data_size": 63488 00:15:24.356 }, 00:15:24.356 { 00:15:24.356 "name": "BaseBdev2", 00:15:24.356 "uuid": "bd9f3601-1c76-5290-9219-c467bc389b9a", 00:15:24.356 "is_configured": true, 00:15:24.356 "data_offset": 2048, 00:15:24.356 "data_size": 63488 00:15:24.356 }, 00:15:24.356 { 00:15:24.356 "name": "BaseBdev3", 00:15:24.356 "uuid": "f20bb6e4-f061-5806-ac83-7d8fc34c3a1c", 00:15:24.356 "is_configured": true, 00:15:24.356 "data_offset": 2048, 00:15:24.356 "data_size": 63488 00:15:24.356 }, 00:15:24.356 { 00:15:24.356 "name": "BaseBdev4", 00:15:24.356 "uuid": "166321fc-e90c-5642-a1b4-8f938d1bc570", 00:15:24.356 "is_configured": true, 00:15:24.356 "data_offset": 2048, 00:15:24.356 "data_size": 63488 00:15:24.356 } 00:15:24.356 ] 00:15:24.356 }' 00:15:24.356 20:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.356 20:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.758 20:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:24.758 20:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:24.758 [2024-11-26 20:28:18.147356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:25.695 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:25.695 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.695 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.695 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.695 20:28:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:25.695 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.696 20:28:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.696 "name": "raid_bdev1", 00:15:25.696 "uuid": "f6109d27-97b6-47f5-8299-98b29dd9518e", 00:15:25.696 "strip_size_kb": 0, 00:15:25.696 "state": "online", 00:15:25.696 "raid_level": "raid1", 00:15:25.696 "superblock": true, 00:15:25.696 "num_base_bdevs": 4, 00:15:25.696 "num_base_bdevs_discovered": 4, 00:15:25.696 "num_base_bdevs_operational": 4, 00:15:25.696 "base_bdevs_list": [ 00:15:25.696 { 00:15:25.696 "name": "BaseBdev1", 00:15:25.696 "uuid": "b8ec2bc9-f834-559f-97c6-f0899f9622dd", 00:15:25.696 "is_configured": true, 00:15:25.696 "data_offset": 2048, 00:15:25.696 "data_size": 63488 00:15:25.696 }, 00:15:25.696 { 00:15:25.696 "name": "BaseBdev2", 00:15:25.696 "uuid": "bd9f3601-1c76-5290-9219-c467bc389b9a", 00:15:25.696 "is_configured": true, 00:15:25.696 "data_offset": 2048, 00:15:25.696 "data_size": 63488 00:15:25.696 }, 00:15:25.696 { 00:15:25.696 "name": "BaseBdev3", 00:15:25.696 "uuid": "f20bb6e4-f061-5806-ac83-7d8fc34c3a1c", 00:15:25.696 "is_configured": true, 00:15:25.696 "data_offset": 2048, 00:15:25.696 "data_size": 63488 00:15:25.696 }, 00:15:25.696 { 00:15:25.696 "name": "BaseBdev4", 00:15:25.696 "uuid": "166321fc-e90c-5642-a1b4-8f938d1bc570", 00:15:25.696 "is_configured": true, 00:15:25.696 "data_offset": 2048, 00:15:25.696 "data_size": 63488 00:15:25.696 } 00:15:25.696 ] 00:15:25.696 }' 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.696 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.262 [2024-11-26 20:28:19.548447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.262 [2024-11-26 20:28:19.548497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.262 [2024-11-26 20:28:19.551299] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.262 [2024-11-26 20:28:19.551381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.262 [2024-11-26 20:28:19.551514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.262 [2024-11-26 20:28:19.551527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:26.262 { 00:15:26.262 "results": [ 00:15:26.262 { 00:15:26.262 "job": "raid_bdev1", 00:15:26.262 "core_mask": "0x1", 00:15:26.262 "workload": "randrw", 00:15:26.262 "percentage": 50, 00:15:26.262 "status": "finished", 00:15:26.262 "queue_depth": 1, 00:15:26.262 "io_size": 131072, 00:15:26.262 "runtime": 1.401595, 00:15:26.262 "iops": 9890.160852457378, 00:15:26.262 "mibps": 1236.2701065571723, 00:15:26.262 "io_failed": 0, 00:15:26.262 "io_timeout": 0, 00:15:26.262 "avg_latency_us": 98.13848118603904, 00:15:26.262 "min_latency_us": 24.929257641921396, 00:15:26.262 "max_latency_us": 1667.0183406113538 00:15:26.262 } 00:15:26.262 ], 00:15:26.262 "core_count": 1 00:15:26.262 } 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75369 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75369 ']' 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75369 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75369 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75369' 00:15:26.262 killing process with pid 75369 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75369 00:15:26.262 [2024-11-26 20:28:19.593660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.262 20:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75369 00:15:26.523 [2024-11-26 20:28:19.957564] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XgpBEkWjlk 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:27.900 20:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:27.900 00:15:27.900 real 0m4.929s 00:15:27.900 user 0m5.833s 00:15:27.900 sys 0m0.614s 
00:15:27.901 ************************************ 00:15:27.901 END TEST raid_read_error_test 00:15:27.901 ************************************ 00:15:27.901 20:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.901 20:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.901 20:28:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:27.901 20:28:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:27.901 20:28:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.901 20:28:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.901 ************************************ 00:15:27.901 START TEST raid_write_error_test 00:15:27.901 ************************************ 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2TlhotD6n5 00:15:27.901 20:28:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75515 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75515 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75515 ']' 00:15:27.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.901 20:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.901 [2024-11-26 20:28:21.418439] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:15:27.901 [2024-11-26 20:28:21.418583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ] 00:15:28.160 [2024-11-26 20:28:21.603099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.418 [2024-11-26 20:28:21.730704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.418 [2024-11-26 20:28:21.952962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.418 [2024-11-26 20:28:21.953031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 BaseBdev1_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 true 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 [2024-11-26 20:28:22.330230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:28.986 [2024-11-26 20:28:22.330312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.986 [2024-11-26 20:28:22.330332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:28.986 [2024-11-26 20:28:22.330344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.986 [2024-11-26 20:28:22.332547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.986 [2024-11-26 20:28:22.332590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.986 BaseBdev1 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 BaseBdev2_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:28.986 20:28:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 true 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 [2024-11-26 20:28:22.399804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:28.986 [2024-11-26 20:28:22.399868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.986 [2024-11-26 20:28:22.399885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:28.986 [2024-11-26 20:28:22.399896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.986 [2024-11-26 20:28:22.402200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.986 [2024-11-26 20:28:22.402269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:28.986 BaseBdev2 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:28.986 BaseBdev3_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 true 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 [2024-11-26 20:28:22.481271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:28.986 [2024-11-26 20:28:22.481337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.986 [2024-11-26 20:28:22.481355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:28.986 [2024-11-26 20:28:22.481366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.986 [2024-11-26 20:28:22.483475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.986 [2024-11-26 20:28:22.483518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:28.986 BaseBdev3 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 BaseBdev4_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.986 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.246 true 00:15:29.246 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.246 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:29.246 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.246 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.246 [2024-11-26 20:28:22.551689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:29.246 [2024-11-26 20:28:22.551752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.246 [2024-11-26 20:28:22.551771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:29.246 [2024-11-26 20:28:22.551782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.246 [2024-11-26 20:28:22.554010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.246 [2024-11-26 20:28:22.554137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:29.246 BaseBdev4 
00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.247 [2024-11-26 20:28:22.563739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.247 [2024-11-26 20:28:22.565818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.247 [2024-11-26 20:28:22.565904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.247 [2024-11-26 20:28:22.565974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.247 [2024-11-26 20:28:22.566238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:29.247 [2024-11-26 20:28:22.566272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:29.247 [2024-11-26 20:28:22.566558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:29.247 [2024-11-26 20:28:22.566765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:29.247 [2024-11-26 20:28:22.566776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:29.247 [2024-11-26 20:28:22.566959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.247 "name": "raid_bdev1", 00:15:29.247 "uuid": "a412ce8b-4610-4494-a67c-fb9156d8dddb", 00:15:29.247 "strip_size_kb": 0, 00:15:29.247 "state": "online", 00:15:29.247 "raid_level": "raid1", 00:15:29.247 "superblock": true, 00:15:29.247 "num_base_bdevs": 4, 00:15:29.247 "num_base_bdevs_discovered": 4, 00:15:29.247 
"num_base_bdevs_operational": 4, 00:15:29.247 "base_bdevs_list": [ 00:15:29.247 { 00:15:29.247 "name": "BaseBdev1", 00:15:29.247 "uuid": "b9676ef0-6201-5e27-9fdf-01e959a53a3f", 00:15:29.247 "is_configured": true, 00:15:29.247 "data_offset": 2048, 00:15:29.247 "data_size": 63488 00:15:29.247 }, 00:15:29.247 { 00:15:29.247 "name": "BaseBdev2", 00:15:29.247 "uuid": "cf819e4e-e06b-590b-a6e2-55f25f28d5ce", 00:15:29.247 "is_configured": true, 00:15:29.247 "data_offset": 2048, 00:15:29.247 "data_size": 63488 00:15:29.247 }, 00:15:29.247 { 00:15:29.247 "name": "BaseBdev3", 00:15:29.247 "uuid": "cfe896b7-b073-5def-8834-511ff4dae3c1", 00:15:29.247 "is_configured": true, 00:15:29.247 "data_offset": 2048, 00:15:29.247 "data_size": 63488 00:15:29.247 }, 00:15:29.247 { 00:15:29.247 "name": "BaseBdev4", 00:15:29.247 "uuid": "c8f6865b-b6bf-5018-b200-1140070ed9bc", 00:15:29.247 "is_configured": true, 00:15:29.247 "data_offset": 2048, 00:15:29.247 "data_size": 63488 00:15:29.247 } 00:15:29.247 ] 00:15:29.247 }' 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.247 20:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 20:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:29.506 20:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:29.765 [2024-11-26 20:28:23.144203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.702 [2024-11-26 20:28:24.053206] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:30.702 [2024-11-26 20:28:24.053416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.702 [2024-11-26 20:28:24.053732] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.702 "name": "raid_bdev1", 00:15:30.702 "uuid": "a412ce8b-4610-4494-a67c-fb9156d8dddb", 00:15:30.702 "strip_size_kb": 0, 00:15:30.702 "state": "online", 00:15:30.702 "raid_level": "raid1", 00:15:30.702 "superblock": true, 00:15:30.702 "num_base_bdevs": 4, 00:15:30.702 "num_base_bdevs_discovered": 3, 00:15:30.702 "num_base_bdevs_operational": 3, 00:15:30.702 "base_bdevs_list": [ 00:15:30.702 { 00:15:30.702 "name": null, 00:15:30.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.702 "is_configured": false, 00:15:30.702 "data_offset": 0, 00:15:30.702 "data_size": 63488 00:15:30.702 }, 00:15:30.702 { 00:15:30.702 "name": "BaseBdev2", 00:15:30.702 "uuid": "cf819e4e-e06b-590b-a6e2-55f25f28d5ce", 00:15:30.702 "is_configured": true, 00:15:30.702 "data_offset": 2048, 00:15:30.702 "data_size": 63488 00:15:30.702 }, 00:15:30.702 { 00:15:30.702 "name": "BaseBdev3", 00:15:30.702 "uuid": "cfe896b7-b073-5def-8834-511ff4dae3c1", 00:15:30.702 "is_configured": true, 00:15:30.702 "data_offset": 2048, 00:15:30.702 "data_size": 63488 00:15:30.702 }, 00:15:30.702 { 00:15:30.702 "name": "BaseBdev4", 00:15:30.702 "uuid": "c8f6865b-b6bf-5018-b200-1140070ed9bc", 00:15:30.702 "is_configured": true, 00:15:30.702 "data_offset": 2048, 00:15:30.702 "data_size": 63488 00:15:30.702 } 00:15:30.702 ] 
00:15:30.702 }' 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.702 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.962 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:30.962 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.962 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.962 [2024-11-26 20:28:24.513955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.962 [2024-11-26 20:28:24.514093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.222 [2024-11-26 20:28:24.517007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.222 [2024-11-26 20:28:24.517114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.222 [2024-11-26 20:28:24.517261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.222 [2024-11-26 20:28:24.517316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:31.222 { 00:15:31.222 "results": [ 00:15:31.222 { 00:15:31.222 "job": "raid_bdev1", 00:15:31.222 "core_mask": "0x1", 00:15:31.222 "workload": "randrw", 00:15:31.222 "percentage": 50, 00:15:31.222 "status": "finished", 00:15:31.222 "queue_depth": 1, 00:15:31.222 "io_size": 131072, 00:15:31.222 "runtime": 1.37034, 00:15:31.222 "iops": 10692.236963089452, 00:15:31.222 "mibps": 1336.5296203861815, 00:15:31.222 "io_failed": 0, 00:15:31.222 "io_timeout": 0, 00:15:31.222 "avg_latency_us": 90.566458161218, 00:15:31.222 "min_latency_us": 24.482096069868994, 00:15:31.222 "max_latency_us": 1845.8829694323144 00:15:31.222 } 00:15:31.222 ], 00:15:31.222 "core_count": 1 
00:15:31.222 } 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75515 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75515 ']' 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75515 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75515 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.222 killing process with pid 75515 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75515' 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75515 00:15:31.222 [2024-11-26 20:28:24.555825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.222 20:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75515 00:15:31.482 [2024-11-26 20:28:24.905815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2TlhotD6n5 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:32.877 ************************************ 00:15:32.877 END TEST raid_write_error_test 00:15:32.877 ************************************ 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:32.877 00:15:32.877 real 0m4.886s 00:15:32.877 user 0m5.729s 00:15:32.877 sys 0m0.638s 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.877 20:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.877 20:28:26 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:15:32.877 20:28:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:32.877 20:28:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:15:32.877 20:28:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:32.877 20:28:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.877 20:28:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.877 ************************************ 00:15:32.877 START TEST raid_rebuild_test 00:15:32.877 ************************************ 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:32.877 
20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75664 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75664 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75664 ']' 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.877 20:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.878 20:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.878 20:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.878 [2024-11-26 20:28:26.372032] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:15:32.878 [2024-11-26 20:28:26.372299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:32.878 Zero copy mechanism will not be used. 
00:15:32.878 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75664 ] 00:15:33.137 [2024-11-26 20:28:26.553758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.137 [2024-11-26 20:28:26.669484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.397 [2024-11-26 20:28:26.879606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.397 [2024-11-26 20:28:26.879777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 BaseBdev1_malloc 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 [2024-11-26 20:28:27.289541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.966 [2024-11-26 20:28:27.289669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.966 [2024-11-26 
20:28:27.289718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.966 [2024-11-26 20:28:27.289735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.966 [2024-11-26 20:28:27.291889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.966 [2024-11-26 20:28:27.291934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.966 BaseBdev1 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 BaseBdev2_malloc 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 [2024-11-26 20:28:27.346838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:33.966 [2024-11-26 20:28:27.346961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.966 [2024-11-26 20:28:27.347005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.966 [2024-11-26 20:28:27.347049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:33.966 [2024-11-26 20:28:27.349210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.966 [2024-11-26 20:28:27.349304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.966 BaseBdev2 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 spare_malloc 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 spare_delay 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 [2024-11-26 20:28:27.430156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.966 [2024-11-26 20:28:27.430284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.966 [2024-11-26 20:28:27.430331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:15:33.966 [2024-11-26 20:28:27.430346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.966 [2024-11-26 20:28:27.432874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.966 spare 00:15:33.966 [2024-11-26 20:28:27.433002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.966 [2024-11-26 20:28:27.442198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.966 [2024-11-26 20:28:27.444351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.966 [2024-11-26 20:28:27.444512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:33.966 [2024-11-26 20:28:27.444565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:33.966 [2024-11-26 20:28:27.444923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:33.966 [2024-11-26 20:28:27.445172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:33.966 [2024-11-26 20:28:27.445224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:33.966 [2024-11-26 20:28:27.445480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.966 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.967 
20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.967 "name": "raid_bdev1", 00:15:33.967 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:33.967 "strip_size_kb": 0, 00:15:33.967 "state": "online", 00:15:33.967 "raid_level": "raid1", 00:15:33.967 "superblock": false, 00:15:33.967 "num_base_bdevs": 2, 00:15:33.967 "num_base_bdevs_discovered": 
2, 00:15:33.967 "num_base_bdevs_operational": 2, 00:15:33.967 "base_bdevs_list": [ 00:15:33.967 { 00:15:33.967 "name": "BaseBdev1", 00:15:33.967 "uuid": "dd21ea9c-67f8-5df9-b8cf-a3de28a5045c", 00:15:33.967 "is_configured": true, 00:15:33.967 "data_offset": 0, 00:15:33.967 "data_size": 65536 00:15:33.967 }, 00:15:33.967 { 00:15:33.967 "name": "BaseBdev2", 00:15:33.967 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:33.967 "is_configured": true, 00:15:33.967 "data_offset": 0, 00:15:33.967 "data_size": 65536 00:15:33.967 } 00:15:33.967 ] 00:15:33.967 }' 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.967 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.535 [2024-11-26 20:28:27.885763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.535 20:28:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:34.793 [2024-11-26 20:28:28.165072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:34.793 /dev/nbd0 00:15:34.793 20:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.793 20:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.793 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:15:34.793 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:34.793 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.793 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.793 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.794 1+0 records in 00:15:34.794 1+0 records out 00:15:34.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472364 s, 8.7 MB/s 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:15:34.794 20:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:40.066 65536+0 records in 00:15:40.066 65536+0 records out 00:15:40.066 33554432 bytes (34 MB, 32 MiB) copied, 4.64443 s, 7.2 MB/s 00:15:40.066 20:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.066 20:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.066 20:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.066 20:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.066 20:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:40.066 20:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.066 20:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.066 [2024-11-26 20:28:33.114655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.066 
20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.066 [2024-11-26 20:28:33.147135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.066 "name": "raid_bdev1", 00:15:40.066 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:40.066 "strip_size_kb": 0, 00:15:40.066 "state": "online", 00:15:40.066 "raid_level": "raid1", 00:15:40.066 "superblock": false, 00:15:40.066 "num_base_bdevs": 2, 00:15:40.066 "num_base_bdevs_discovered": 1, 00:15:40.066 "num_base_bdevs_operational": 1, 00:15:40.066 "base_bdevs_list": [ 00:15:40.066 { 00:15:40.066 "name": null, 00:15:40.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.066 "is_configured": false, 00:15:40.066 "data_offset": 0, 00:15:40.066 "data_size": 65536 00:15:40.066 }, 00:15:40.066 { 00:15:40.066 "name": "BaseBdev2", 00:15:40.066 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:40.066 "is_configured": true, 00:15:40.066 "data_offset": 0, 00:15:40.066 "data_size": 65536 00:15:40.066 } 00:15:40.066 ] 00:15:40.066 }' 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.066 [2024-11-26 20:28:33.558468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.066 [2024-11-26 20:28:33.578899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:15:40.066 20:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.066 20:28:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:40.066 [2024-11-26 20:28:33.581183] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.445 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.445 "name": "raid_bdev1", 00:15:41.445 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:41.445 "strip_size_kb": 0, 00:15:41.445 "state": "online", 00:15:41.445 "raid_level": "raid1", 00:15:41.445 "superblock": false, 00:15:41.445 "num_base_bdevs": 2, 00:15:41.445 "num_base_bdevs_discovered": 2, 00:15:41.445 "num_base_bdevs_operational": 2, 00:15:41.445 "process": { 00:15:41.445 "type": "rebuild", 00:15:41.445 "target": "spare", 00:15:41.445 "progress": { 00:15:41.445 "blocks": 20480, 00:15:41.445 "percent": 31 00:15:41.445 } 00:15:41.445 }, 00:15:41.445 "base_bdevs_list": [ 00:15:41.445 { 
00:15:41.445 "name": "spare", 00:15:41.445 "uuid": "af32abe5-169a-5d89-8c07-5353d6220627", 00:15:41.445 "is_configured": true, 00:15:41.445 "data_offset": 0, 00:15:41.445 "data_size": 65536 00:15:41.445 }, 00:15:41.445 { 00:15:41.446 "name": "BaseBdev2", 00:15:41.446 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:41.446 "is_configured": true, 00:15:41.446 "data_offset": 0, 00:15:41.446 "data_size": 65536 00:15:41.446 } 00:15:41.446 ] 00:15:41.446 }' 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.446 [2024-11-26 20:28:34.748954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.446 [2024-11-26 20:28:34.787384] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:41.446 [2024-11-26 20:28:34.787632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.446 [2024-11-26 20:28:34.787677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.446 [2024-11-26 20:28:34.787706] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.446 20:28:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.446 "name": "raid_bdev1", 00:15:41.446 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:41.446 "strip_size_kb": 0, 00:15:41.446 "state": "online", 00:15:41.446 "raid_level": "raid1", 00:15:41.446 "superblock": false, 00:15:41.446 "num_base_bdevs": 2, 00:15:41.446 "num_base_bdevs_discovered": 1, 
00:15:41.446 "num_base_bdevs_operational": 1, 00:15:41.446 "base_bdevs_list": [ 00:15:41.446 { 00:15:41.446 "name": null, 00:15:41.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.446 "is_configured": false, 00:15:41.446 "data_offset": 0, 00:15:41.446 "data_size": 65536 00:15:41.446 }, 00:15:41.446 { 00:15:41.446 "name": "BaseBdev2", 00:15:41.446 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:41.446 "is_configured": true, 00:15:41.446 "data_offset": 0, 00:15:41.446 "data_size": 65536 00:15:41.446 } 00:15:41.446 ] 00:15:41.446 }' 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.446 20:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.014 "name": "raid_bdev1", 00:15:42.014 "uuid": 
"6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:42.014 "strip_size_kb": 0, 00:15:42.014 "state": "online", 00:15:42.014 "raid_level": "raid1", 00:15:42.014 "superblock": false, 00:15:42.014 "num_base_bdevs": 2, 00:15:42.014 "num_base_bdevs_discovered": 1, 00:15:42.014 "num_base_bdevs_operational": 1, 00:15:42.014 "base_bdevs_list": [ 00:15:42.014 { 00:15:42.014 "name": null, 00:15:42.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.014 "is_configured": false, 00:15:42.014 "data_offset": 0, 00:15:42.014 "data_size": 65536 00:15:42.014 }, 00:15:42.014 { 00:15:42.014 "name": "BaseBdev2", 00:15:42.014 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:42.014 "is_configured": true, 00:15:42.014 "data_offset": 0, 00:15:42.014 "data_size": 65536 00:15:42.014 } 00:15:42.014 ] 00:15:42.014 }' 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 [2024-11-26 20:28:35.406158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.014 [2024-11-26 20:28:35.424152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.014 20:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:15:42.014 [2024-11-26 20:28:35.426264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.951 "name": "raid_bdev1", 00:15:42.951 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:42.951 "strip_size_kb": 0, 00:15:42.951 "state": "online", 00:15:42.951 "raid_level": "raid1", 00:15:42.951 "superblock": false, 00:15:42.951 "num_base_bdevs": 2, 00:15:42.951 "num_base_bdevs_discovered": 2, 00:15:42.951 "num_base_bdevs_operational": 2, 00:15:42.951 "process": { 00:15:42.951 "type": "rebuild", 00:15:42.951 "target": "spare", 00:15:42.951 "progress": { 00:15:42.951 "blocks": 20480, 00:15:42.951 "percent": 31 00:15:42.951 } 00:15:42.951 }, 00:15:42.951 "base_bdevs_list": [ 00:15:42.951 { 00:15:42.951 "name": "spare", 00:15:42.951 "uuid": 
"af32abe5-169a-5d89-8c07-5353d6220627", 00:15:42.951 "is_configured": true, 00:15:42.951 "data_offset": 0, 00:15:42.951 "data_size": 65536 00:15:42.951 }, 00:15:42.951 { 00:15:42.951 "name": "BaseBdev2", 00:15:42.951 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:42.951 "is_configured": true, 00:15:42.951 "data_offset": 0, 00:15:42.951 "data_size": 65536 00:15:42.951 } 00:15:42.951 ] 00:15:42.951 }' 00:15:42.951 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=389 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.210 "name": "raid_bdev1", 00:15:43.210 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:43.210 "strip_size_kb": 0, 00:15:43.210 "state": "online", 00:15:43.210 "raid_level": "raid1", 00:15:43.210 "superblock": false, 00:15:43.210 "num_base_bdevs": 2, 00:15:43.210 "num_base_bdevs_discovered": 2, 00:15:43.210 "num_base_bdevs_operational": 2, 00:15:43.210 "process": { 00:15:43.210 "type": "rebuild", 00:15:43.210 "target": "spare", 00:15:43.210 "progress": { 00:15:43.210 "blocks": 22528, 00:15:43.210 "percent": 34 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 "base_bdevs_list": [ 00:15:43.210 { 00:15:43.210 "name": "spare", 00:15:43.210 "uuid": "af32abe5-169a-5d89-8c07-5353d6220627", 00:15:43.210 "is_configured": true, 00:15:43.210 "data_offset": 0, 00:15:43.210 "data_size": 65536 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "name": "BaseBdev2", 00:15:43.210 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:43.210 "is_configured": true, 00:15:43.210 "data_offset": 0, 00:15:43.210 "data_size": 65536 00:15:43.210 } 00:15:43.210 ] 00:15:43.210 }' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.210 20:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.587 "name": "raid_bdev1", 00:15:44.587 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:44.587 "strip_size_kb": 0, 00:15:44.587 "state": "online", 00:15:44.587 "raid_level": "raid1", 00:15:44.587 "superblock": false, 00:15:44.587 "num_base_bdevs": 2, 00:15:44.587 "num_base_bdevs_discovered": 2, 00:15:44.587 "num_base_bdevs_operational": 2, 00:15:44.587 "process": { 00:15:44.587 "type": "rebuild", 00:15:44.587 "target": "spare", 
00:15:44.587 "progress": { 00:15:44.587 "blocks": 47104, 00:15:44.587 "percent": 71 00:15:44.587 } 00:15:44.587 }, 00:15:44.587 "base_bdevs_list": [ 00:15:44.587 { 00:15:44.587 "name": "spare", 00:15:44.587 "uuid": "af32abe5-169a-5d89-8c07-5353d6220627", 00:15:44.587 "is_configured": true, 00:15:44.587 "data_offset": 0, 00:15:44.587 "data_size": 65536 00:15:44.587 }, 00:15:44.587 { 00:15:44.587 "name": "BaseBdev2", 00:15:44.587 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:44.587 "is_configured": true, 00:15:44.587 "data_offset": 0, 00:15:44.587 "data_size": 65536 00:15:44.587 } 00:15:44.587 ] 00:15:44.587 }' 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.587 20:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.156 [2024-11-26 20:28:38.642320] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:45.156 [2024-11-26 20:28:38.642416] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:45.156 [2024-11-26 20:28:38.642470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.415 "name": "raid_bdev1", 00:15:45.415 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:45.415 "strip_size_kb": 0, 00:15:45.415 "state": "online", 00:15:45.415 "raid_level": "raid1", 00:15:45.415 "superblock": false, 00:15:45.415 "num_base_bdevs": 2, 00:15:45.415 "num_base_bdevs_discovered": 2, 00:15:45.415 "num_base_bdevs_operational": 2, 00:15:45.415 "base_bdevs_list": [ 00:15:45.415 { 00:15:45.415 "name": "spare", 00:15:45.415 "uuid": "af32abe5-169a-5d89-8c07-5353d6220627", 00:15:45.415 "is_configured": true, 00:15:45.415 "data_offset": 0, 00:15:45.415 "data_size": 65536 00:15:45.415 }, 00:15:45.415 { 00:15:45.415 "name": "BaseBdev2", 00:15:45.415 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:45.415 "is_configured": true, 00:15:45.415 "data_offset": 0, 00:15:45.415 "data_size": 65536 00:15:45.415 } 00:15:45.415 ] 00:15:45.415 }' 00:15:45.415 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.675 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:45.675 20:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.675 "name": "raid_bdev1", 00:15:45.675 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:45.675 "strip_size_kb": 0, 00:15:45.675 "state": "online", 00:15:45.675 "raid_level": "raid1", 00:15:45.675 "superblock": false, 00:15:45.675 "num_base_bdevs": 2, 00:15:45.675 "num_base_bdevs_discovered": 2, 00:15:45.675 "num_base_bdevs_operational": 2, 00:15:45.675 "base_bdevs_list": [ 00:15:45.675 { 00:15:45.675 "name": "spare", 00:15:45.675 "uuid": "af32abe5-169a-5d89-8c07-5353d6220627", 00:15:45.675 "is_configured": true, 00:15:45.675 "data_offset": 0, 00:15:45.675 "data_size": 65536 
00:15:45.675 }, 00:15:45.675 { 00:15:45.675 "name": "BaseBdev2", 00:15:45.675 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:45.675 "is_configured": true, 00:15:45.675 "data_offset": 0, 00:15:45.675 "data_size": 65536 00:15:45.675 } 00:15:45.675 ] 00:15:45.675 }' 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.675 "name": "raid_bdev1", 00:15:45.675 "uuid": "6c276749-2f8f-4789-be66-a60d9950acd5", 00:15:45.675 "strip_size_kb": 0, 00:15:45.675 "state": "online", 00:15:45.675 "raid_level": "raid1", 00:15:45.675 "superblock": false, 00:15:45.675 "num_base_bdevs": 2, 00:15:45.675 "num_base_bdevs_discovered": 2, 00:15:45.675 "num_base_bdevs_operational": 2, 00:15:45.675 "base_bdevs_list": [ 00:15:45.675 { 00:15:45.675 "name": "spare", 00:15:45.675 "uuid": "af32abe5-169a-5d89-8c07-5353d6220627", 00:15:45.675 "is_configured": true, 00:15:45.675 "data_offset": 0, 00:15:45.675 "data_size": 65536 00:15:45.675 }, 00:15:45.675 { 00:15:45.675 "name": "BaseBdev2", 00:15:45.675 "uuid": "21650e85-602e-5ef8-8623-f17afbf74b57", 00:15:45.675 "is_configured": true, 00:15:45.675 "data_offset": 0, 00:15:45.675 "data_size": 65536 00:15:45.675 } 00:15:45.675 ] 00:15:45.675 }' 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.675 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.245 [2024-11-26 20:28:39.640100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.245 [2024-11-26 20:28:39.640260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:46.245 [2024-11-26 20:28:39.640403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.245 [2024-11-26 20:28:39.640521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.245 [2024-11-26 20:28:39.640577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.245 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:46.505 /dev/nbd0 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.505 1+0 records in 00:15:46.505 1+0 records out 00:15:46.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576213 s, 7.1 MB/s 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.505 20:28:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:46.764 /dev/nbd1 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:15:46.764 1+0 records in 00:15:46.764 1+0 records out 00:15:46.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045188 s, 9.1 MB/s 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.764 20:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:47.024 20:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:47.024 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.024 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.024 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.024 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:47.024 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.024 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.283 20:28:40 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.283 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75664 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 
75664 ']' 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75664 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.542 20:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75664 00:15:47.542 20:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.542 20:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.542 killing process with pid 75664 00:15:47.542 Received shutdown signal, test time was about 60.000000 seconds 00:15:47.542 00:15:47.542 Latency(us) 00:15:47.542 [2024-11-26T20:28:41.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.542 [2024-11-26T20:28:41.097Z] =================================================================================================================== 00:15:47.542 [2024-11-26T20:28:41.097Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:47.542 20:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75664' 00:15:47.542 20:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75664 00:15:47.542 [2024-11-26 20:28:41.019395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.542 20:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75664 00:15:48.111 [2024-11-26 20:28:41.359839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:49.490 00:15:49.490 real 0m16.391s 00:15:49.490 user 0m18.274s 00:15:49.490 sys 0m3.325s 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:49.490 ************************************ 00:15:49.490 END TEST raid_rebuild_test 00:15:49.490 ************************************ 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.490 20:28:42 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:49.490 20:28:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:49.490 20:28:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.490 20:28:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.490 ************************************ 00:15:49.490 START TEST raid_rebuild_test_sb 00:15:49.490 ************************************ 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76093 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76093 00:15:49.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76093 ']' 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.490 20:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:49.490 [2024-11-26 20:28:42.821062] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:15:49.490 [2024-11-26 20:28:42.821282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:49.490 Zero copy mechanism will not be used. 
00:15:49.490 -allocations --file-prefix=spdk_pid76093 ] 00:15:49.490 [2024-11-26 20:28:43.001764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.769 [2024-11-26 20:28:43.143099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.026 [2024-11-26 20:28:43.382403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.026 [2024-11-26 20:28:43.382621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.286 BaseBdev1_malloc 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.286 [2024-11-26 20:28:43.746598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.286 [2024-11-26 20:28:43.746679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.286 [2024-11-26 20:28:43.746706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:15:50.286 [2024-11-26 20:28:43.746719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.286 [2024-11-26 20:28:43.749488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.286 [2024-11-26 20:28:43.749536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.286 BaseBdev1 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.286 BaseBdev2_malloc 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.286 [2024-11-26 20:28:43.810524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:50.286 [2024-11-26 20:28:43.810605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.286 [2024-11-26 20:28:43.810634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.286 [2024-11-26 20:28:43.810647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.286 [2024-11-26 20:28:43.813288] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.286 [2024-11-26 20:28:43.813401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:50.286 BaseBdev2 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.286 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.545 spare_malloc 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.545 spare_delay 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.545 [2024-11-26 20:28:43.897023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.545 [2024-11-26 20:28:43.897102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.545 [2024-11-26 20:28:43.897128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:15:50.545 [2024-11-26 20:28:43.897139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.545 [2024-11-26 20:28:43.899818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.545 [2024-11-26 20:28:43.899862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.545 spare 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.545 [2024-11-26 20:28:43.909066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.545 [2024-11-26 20:28:43.911482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.545 [2024-11-26 20:28:43.911670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:50.545 [2024-11-26 20:28:43.911687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:50.545 [2024-11-26 20:28:43.911958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:50.545 [2024-11-26 20:28:43.912145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:50.545 [2024-11-26 20:28:43.912154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:50.545 [2024-11-26 20:28:43.912417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.545 
20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.545 "name": "raid_bdev1", 00:15:50.545 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:15:50.545 "strip_size_kb": 0, 00:15:50.545 "state": "online", 00:15:50.545 "raid_level": "raid1", 00:15:50.545 "superblock": true, 00:15:50.545 "num_base_bdevs": 
2, 00:15:50.545 "num_base_bdevs_discovered": 2, 00:15:50.545 "num_base_bdevs_operational": 2, 00:15:50.545 "base_bdevs_list": [ 00:15:50.545 { 00:15:50.545 "name": "BaseBdev1", 00:15:50.545 "uuid": "1ce88227-77ba-5b62-a102-e5f63bf2da09", 00:15:50.545 "is_configured": true, 00:15:50.545 "data_offset": 2048, 00:15:50.545 "data_size": 63488 00:15:50.545 }, 00:15:50.545 { 00:15:50.545 "name": "BaseBdev2", 00:15:50.545 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:15:50.545 "is_configured": true, 00:15:50.545 "data_offset": 2048, 00:15:50.545 "data_size": 63488 00:15:50.545 } 00:15:50.545 ] 00:15:50.545 }' 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.545 20:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.113 [2024-11-26 20:28:44.392713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.113 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:51.372 [2024-11-26 20:28:44.679909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:51.372 /dev/nbd0 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.372 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.372 1+0 records in 00:15:51.373 1+0 records out 00:15:51.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618309 s, 6.6 MB/s 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.373 20:28:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:51.373 20:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:56.643 63488+0 records in 00:15:56.643 63488+0 records out 00:15:56.643 32505856 bytes (33 MB, 31 MiB) copied, 4.77858 s, 6.8 MB/s 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.643 [2024-11-26 20:28:49.745182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.643 [2024-11-26 20:28:49.785217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.643 20:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.644 20:28:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.644 "name": "raid_bdev1", 00:15:56.644 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:15:56.644 "strip_size_kb": 0, 00:15:56.644 "state": "online", 00:15:56.644 "raid_level": "raid1", 00:15:56.644 "superblock": true, 00:15:56.644 "num_base_bdevs": 2, 00:15:56.644 "num_base_bdevs_discovered": 1, 00:15:56.644 "num_base_bdevs_operational": 1, 00:15:56.644 "base_bdevs_list": [ 00:15:56.644 { 00:15:56.644 "name": null, 00:15:56.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.644 "is_configured": false, 00:15:56.644 "data_offset": 0, 00:15:56.644 "data_size": 63488 00:15:56.644 }, 00:15:56.644 { 00:15:56.644 "name": "BaseBdev2", 00:15:56.644 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:15:56.644 "is_configured": true, 00:15:56.644 "data_offset": 2048, 00:15:56.644 "data_size": 63488 00:15:56.644 } 00:15:56.644 ] 00:15:56.644 }' 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.644 20:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.903 20:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:56.903 20:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.903 20:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.904 [2024-11-26 20:28:50.220518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:15:56.904 [2024-11-26 20:28:50.239936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:56.904 20:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.904 [2024-11-26 20:28:50.242059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.904 20:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.840 "name": "raid_bdev1", 00:15:57.840 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:15:57.840 "strip_size_kb": 0, 00:15:57.840 "state": "online", 00:15:57.840 "raid_level": "raid1", 00:15:57.840 "superblock": true, 00:15:57.840 "num_base_bdevs": 2, 00:15:57.840 
"num_base_bdevs_discovered": 2, 00:15:57.840 "num_base_bdevs_operational": 2, 00:15:57.840 "process": { 00:15:57.840 "type": "rebuild", 00:15:57.840 "target": "spare", 00:15:57.840 "progress": { 00:15:57.840 "blocks": 20480, 00:15:57.840 "percent": 32 00:15:57.840 } 00:15:57.840 }, 00:15:57.840 "base_bdevs_list": [ 00:15:57.840 { 00:15:57.840 "name": "spare", 00:15:57.840 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:15:57.840 "is_configured": true, 00:15:57.840 "data_offset": 2048, 00:15:57.840 "data_size": 63488 00:15:57.840 }, 00:15:57.840 { 00:15:57.840 "name": "BaseBdev2", 00:15:57.840 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:15:57.840 "is_configured": true, 00:15:57.840 "data_offset": 2048, 00:15:57.840 "data_size": 63488 00:15:57.840 } 00:15:57.840 ] 00:15:57.840 }' 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.840 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.100 [2024-11-26 20:28:51.436995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.100 [2024-11-26 20:28:51.447991] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.100 [2024-11-26 20:28:51.448061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.100 [2024-11-26 20:28:51.448078] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.100 [2024-11-26 20:28:51.448087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.100 20:28:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.100 "name": "raid_bdev1", 00:15:58.100 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:15:58.100 "strip_size_kb": 0, 00:15:58.100 "state": "online", 00:15:58.100 "raid_level": "raid1", 00:15:58.100 "superblock": true, 00:15:58.100 "num_base_bdevs": 2, 00:15:58.100 "num_base_bdevs_discovered": 1, 00:15:58.100 "num_base_bdevs_operational": 1, 00:15:58.100 "base_bdevs_list": [ 00:15:58.100 { 00:15:58.100 "name": null, 00:15:58.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.100 "is_configured": false, 00:15:58.100 "data_offset": 0, 00:15:58.100 "data_size": 63488 00:15:58.100 }, 00:15:58.100 { 00:15:58.100 "name": "BaseBdev2", 00:15:58.100 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:15:58.100 "is_configured": true, 00:15:58.100 "data_offset": 2048, 00:15:58.100 "data_size": 63488 00:15:58.100 } 00:15:58.100 ] 00:15:58.100 }' 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.100 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.668 20:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.668 "name": "raid_bdev1", 00:15:58.668 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:15:58.668 "strip_size_kb": 0, 00:15:58.668 "state": "online", 00:15:58.668 "raid_level": "raid1", 00:15:58.668 "superblock": true, 00:15:58.668 "num_base_bdevs": 2, 00:15:58.668 "num_base_bdevs_discovered": 1, 00:15:58.668 "num_base_bdevs_operational": 1, 00:15:58.668 "base_bdevs_list": [ 00:15:58.668 { 00:15:58.668 "name": null, 00:15:58.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.668 "is_configured": false, 00:15:58.668 "data_offset": 0, 00:15:58.668 "data_size": 63488 00:15:58.668 }, 00:15:58.668 { 00:15:58.668 "name": "BaseBdev2", 00:15:58.668 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:15:58.668 "is_configured": true, 00:15:58.668 "data_offset": 2048, 00:15:58.668 "data_size": 63488 00:15:58.668 } 00:15:58.668 ] 00:15:58.668 }' 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.668 [2024-11-26 20:28:52.110527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.668 [2024-11-26 20:28:52.127800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.668 20:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:58.668 [2024-11-26 20:28:52.129883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.606 20:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.865 20:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.865 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.865 "name": "raid_bdev1", 00:15:59.865 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:15:59.865 "strip_size_kb": 0, 00:15:59.865 "state": "online", 
00:15:59.865 "raid_level": "raid1", 00:15:59.865 "superblock": true, 00:15:59.866 "num_base_bdevs": 2, 00:15:59.866 "num_base_bdevs_discovered": 2, 00:15:59.866 "num_base_bdevs_operational": 2, 00:15:59.866 "process": { 00:15:59.866 "type": "rebuild", 00:15:59.866 "target": "spare", 00:15:59.866 "progress": { 00:15:59.866 "blocks": 20480, 00:15:59.866 "percent": 32 00:15:59.866 } 00:15:59.866 }, 00:15:59.866 "base_bdevs_list": [ 00:15:59.866 { 00:15:59.866 "name": "spare", 00:15:59.866 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:15:59.866 "is_configured": true, 00:15:59.866 "data_offset": 2048, 00:15:59.866 "data_size": 63488 00:15:59.866 }, 00:15:59.866 { 00:15:59.866 "name": "BaseBdev2", 00:15:59.866 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:15:59.866 "is_configured": true, 00:15:59.866 "data_offset": 2048, 00:15:59.866 "data_size": 63488 00:15:59.866 } 00:15:59.866 ] 00:15:59.866 }' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:59.866 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.866 "name": "raid_bdev1", 00:15:59.866 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:15:59.866 "strip_size_kb": 0, 00:15:59.866 "state": "online", 00:15:59.866 "raid_level": "raid1", 00:15:59.866 "superblock": true, 00:15:59.866 "num_base_bdevs": 2, 00:15:59.866 "num_base_bdevs_discovered": 2, 00:15:59.866 "num_base_bdevs_operational": 2, 00:15:59.866 "process": { 00:15:59.866 "type": "rebuild", 00:15:59.866 "target": "spare", 00:15:59.866 "progress": { 00:15:59.866 "blocks": 22528, 00:15:59.866 "percent": 35 00:15:59.866 } 00:15:59.866 }, 00:15:59.866 
"base_bdevs_list": [ 00:15:59.866 { 00:15:59.866 "name": "spare", 00:15:59.866 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:15:59.866 "is_configured": true, 00:15:59.866 "data_offset": 2048, 00:15:59.866 "data_size": 63488 00:15:59.866 }, 00:15:59.866 { 00:15:59.866 "name": "BaseBdev2", 00:15:59.866 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:15:59.866 "is_configured": true, 00:15:59.866 "data_offset": 2048, 00:15:59.866 "data_size": 63488 00:15:59.866 } 00:15:59.866 ] 00:15:59.866 }' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.866 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.125 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.125 20:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.198 "name": "raid_bdev1", 00:16:01.198 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:01.198 "strip_size_kb": 0, 00:16:01.198 "state": "online", 00:16:01.198 "raid_level": "raid1", 00:16:01.198 "superblock": true, 00:16:01.198 "num_base_bdevs": 2, 00:16:01.198 "num_base_bdevs_discovered": 2, 00:16:01.198 "num_base_bdevs_operational": 2, 00:16:01.198 "process": { 00:16:01.198 "type": "rebuild", 00:16:01.198 "target": "spare", 00:16:01.198 "progress": { 00:16:01.198 "blocks": 47104, 00:16:01.198 "percent": 74 00:16:01.198 } 00:16:01.198 }, 00:16:01.198 "base_bdevs_list": [ 00:16:01.198 { 00:16:01.198 "name": "spare", 00:16:01.198 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:01.198 "is_configured": true, 00:16:01.198 "data_offset": 2048, 00:16:01.198 "data_size": 63488 00:16:01.198 }, 00:16:01.198 { 00:16:01.198 "name": "BaseBdev2", 00:16:01.198 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:01.198 "is_configured": true, 00:16:01.198 "data_offset": 2048, 00:16:01.198 "data_size": 63488 00:16:01.198 } 00:16:01.198 ] 00:16:01.198 }' 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.198 20:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:01.768 [2024-11-26 20:28:55.245382] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:01.768 [2024-11-26 20:28:55.245584] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:01.768 [2024-11-26 20:28:55.245730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.028 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.287 "name": "raid_bdev1", 00:16:02.287 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:02.287 "strip_size_kb": 0, 00:16:02.287 "state": "online", 00:16:02.287 "raid_level": "raid1", 00:16:02.287 "superblock": true, 00:16:02.287 "num_base_bdevs": 2, 00:16:02.287 
"num_base_bdevs_discovered": 2, 00:16:02.287 "num_base_bdevs_operational": 2, 00:16:02.287 "base_bdevs_list": [ 00:16:02.287 { 00:16:02.287 "name": "spare", 00:16:02.287 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:02.287 "is_configured": true, 00:16:02.287 "data_offset": 2048, 00:16:02.287 "data_size": 63488 00:16:02.287 }, 00:16:02.287 { 00:16:02.287 "name": "BaseBdev2", 00:16:02.287 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:02.287 "is_configured": true, 00:16:02.287 "data_offset": 2048, 00:16:02.287 "data_size": 63488 00:16:02.287 } 00:16:02.287 ] 00:16:02.287 }' 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.287 "name": "raid_bdev1", 00:16:02.287 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:02.287 "strip_size_kb": 0, 00:16:02.287 "state": "online", 00:16:02.287 "raid_level": "raid1", 00:16:02.287 "superblock": true, 00:16:02.287 "num_base_bdevs": 2, 00:16:02.287 "num_base_bdevs_discovered": 2, 00:16:02.287 "num_base_bdevs_operational": 2, 00:16:02.287 "base_bdevs_list": [ 00:16:02.287 { 00:16:02.287 "name": "spare", 00:16:02.287 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:02.287 "is_configured": true, 00:16:02.287 "data_offset": 2048, 00:16:02.287 "data_size": 63488 00:16:02.287 }, 00:16:02.287 { 00:16:02.287 "name": "BaseBdev2", 00:16:02.287 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:02.287 "is_configured": true, 00:16:02.287 "data_offset": 2048, 00:16:02.287 "data_size": 63488 00:16:02.287 } 00:16:02.287 ] 00:16:02.287 }' 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.287 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.546 "name": "raid_bdev1", 00:16:02.546 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:02.546 "strip_size_kb": 0, 00:16:02.546 "state": "online", 00:16:02.546 "raid_level": "raid1", 00:16:02.546 "superblock": true, 00:16:02.546 "num_base_bdevs": 2, 00:16:02.546 "num_base_bdevs_discovered": 2, 00:16:02.546 "num_base_bdevs_operational": 2, 00:16:02.546 "base_bdevs_list": [ 00:16:02.546 { 00:16:02.546 "name": "spare", 00:16:02.546 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:02.546 "is_configured": true, 00:16:02.546 "data_offset": 2048, 00:16:02.546 
"data_size": 63488 00:16:02.546 }, 00:16:02.546 { 00:16:02.546 "name": "BaseBdev2", 00:16:02.546 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:02.546 "is_configured": true, 00:16:02.546 "data_offset": 2048, 00:16:02.546 "data_size": 63488 00:16:02.546 } 00:16:02.546 ] 00:16:02.546 }' 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.546 20:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.805 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.805 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.805 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.805 [2024-11-26 20:28:56.280462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.805 [2024-11-26 20:28:56.280581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.805 [2024-11-26 20:28:56.280705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.805 [2024-11-26 20:28:56.280827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.805 [2024-11-26 20:28:56.280910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:02.805 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.805 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.805 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.806 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:03.065 /dev/nbd0 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:03.065 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.324 1+0 records in 00:16:03.325 1+0 records out 00:16:03.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459877 s, 8.9 MB/s 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.325 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:03.325 /dev/nbd1 00:16:03.585 20:28:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.585 1+0 records in 00:16:03.585 1+0 records out 00:16:03.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631419 s, 6.5 MB/s 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:03.585 20:28:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.585 20:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:03.585 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:03.585 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.585 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:03.585 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:03.585 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:03.585 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.585 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.844 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.103 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.363 [2024-11-26 20:28:57.666287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:16:04.363 [2024-11-26 20:28:57.666364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.363 [2024-11-26 20:28:57.666393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:04.363 [2024-11-26 20:28:57.666405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.363 [2024-11-26 20:28:57.668885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.363 [2024-11-26 20:28:57.669012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.363 [2024-11-26 20:28:57.669139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:04.363 [2024-11-26 20:28:57.669204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.363 [2024-11-26 20:28:57.669387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.363 spare 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.363 [2024-11-26 20:28:57.769310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:04.363 [2024-11-26 20:28:57.769376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:04.363 [2024-11-26 20:28:57.769759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:16:04.363 [2024-11-26 20:28:57.769997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:04.363 [2024-11-26 20:28:57.770010] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:04.363 [2024-11-26 20:28:57.770300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.363 
20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.363 "name": "raid_bdev1", 00:16:04.363 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:04.363 "strip_size_kb": 0, 00:16:04.363 "state": "online", 00:16:04.363 "raid_level": "raid1", 00:16:04.363 "superblock": true, 00:16:04.363 "num_base_bdevs": 2, 00:16:04.363 "num_base_bdevs_discovered": 2, 00:16:04.363 "num_base_bdevs_operational": 2, 00:16:04.363 "base_bdevs_list": [ 00:16:04.363 { 00:16:04.363 "name": "spare", 00:16:04.363 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:04.363 "is_configured": true, 00:16:04.363 "data_offset": 2048, 00:16:04.363 "data_size": 63488 00:16:04.363 }, 00:16:04.363 { 00:16:04.363 "name": "BaseBdev2", 00:16:04.363 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:04.363 "is_configured": true, 00:16:04.363 "data_offset": 2048, 00:16:04.363 "data_size": 63488 00:16:04.363 } 00:16:04.363 ] 00:16:04.363 }' 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.363 20:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.931 "name": "raid_bdev1", 00:16:04.931 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:04.931 "strip_size_kb": 0, 00:16:04.931 "state": "online", 00:16:04.931 "raid_level": "raid1", 00:16:04.931 "superblock": true, 00:16:04.931 "num_base_bdevs": 2, 00:16:04.931 "num_base_bdevs_discovered": 2, 00:16:04.931 "num_base_bdevs_operational": 2, 00:16:04.931 "base_bdevs_list": [ 00:16:04.931 { 00:16:04.931 "name": "spare", 00:16:04.931 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:04.931 "is_configured": true, 00:16:04.931 "data_offset": 2048, 00:16:04.931 "data_size": 63488 00:16:04.931 }, 00:16:04.931 { 00:16:04.931 "name": "BaseBdev2", 00:16:04.931 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:04.931 "is_configured": true, 00:16:04.931 "data_offset": 2048, 00:16:04.931 "data_size": 63488 00:16:04.931 } 00:16:04.931 ] 00:16:04.931 }' 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:04.931 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.190 [2024-11-26 20:28:58.489017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.190 20:28:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.190 "name": "raid_bdev1", 00:16:05.190 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:05.190 "strip_size_kb": 0, 00:16:05.190 "state": "online", 00:16:05.190 "raid_level": "raid1", 00:16:05.190 "superblock": true, 00:16:05.190 "num_base_bdevs": 2, 00:16:05.190 "num_base_bdevs_discovered": 1, 00:16:05.190 "num_base_bdevs_operational": 1, 00:16:05.190 "base_bdevs_list": [ 00:16:05.190 { 00:16:05.190 "name": null, 00:16:05.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.190 "is_configured": false, 00:16:05.190 "data_offset": 0, 00:16:05.190 "data_size": 63488 00:16:05.190 }, 00:16:05.190 { 00:16:05.190 "name": "BaseBdev2", 00:16:05.190 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:05.190 "is_configured": true, 00:16:05.190 "data_offset": 2048, 00:16:05.190 "data_size": 63488 00:16:05.190 } 00:16:05.190 ] 00:16:05.190 }' 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.190 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.450 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.450 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 20:28:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:05.450 [2024-11-26 20:28:58.944377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.450 [2024-11-26 20:28:58.944709] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:05.450 [2024-11-26 20:28:58.944780] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:05.450 [2024-11-26 20:28:58.945051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.450 [2024-11-26 20:28:58.962123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:05.450 20:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.450 20:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:05.450 [2024-11-26 20:28:58.964338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.829 20:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.829 "name": "raid_bdev1", 00:16:06.829 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:06.829 "strip_size_kb": 0, 00:16:06.829 "state": "online", 00:16:06.829 "raid_level": "raid1", 00:16:06.829 "superblock": true, 00:16:06.829 "num_base_bdevs": 2, 00:16:06.829 "num_base_bdevs_discovered": 2, 00:16:06.829 "num_base_bdevs_operational": 2, 00:16:06.829 "process": { 00:16:06.829 "type": "rebuild", 00:16:06.829 "target": "spare", 00:16:06.829 "progress": { 00:16:06.829 "blocks": 20480, 00:16:06.829 "percent": 32 00:16:06.829 } 00:16:06.829 }, 00:16:06.829 "base_bdevs_list": [ 00:16:06.829 { 00:16:06.829 "name": "spare", 00:16:06.829 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:06.829 "is_configured": true, 00:16:06.829 "data_offset": 2048, 00:16:06.829 "data_size": 63488 00:16:06.829 }, 00:16:06.829 { 00:16:06.829 "name": "BaseBdev2", 00:16:06.829 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:06.829 "is_configured": true, 00:16:06.829 "data_offset": 2048, 00:16:06.829 "data_size": 63488 00:16:06.829 } 00:16:06.829 ] 00:16:06.829 }' 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.829 [2024-11-26 20:29:00.127474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.829 [2024-11-26 20:29:00.170420] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.829 [2024-11-26 20:29:00.170512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.829 [2024-11-26 20:29:00.170528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.829 [2024-11-26 20:29:00.170538] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.829 
20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.829 "name": "raid_bdev1", 00:16:06.829 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:06.829 "strip_size_kb": 0, 00:16:06.829 "state": "online", 00:16:06.829 "raid_level": "raid1", 00:16:06.829 "superblock": true, 00:16:06.829 "num_base_bdevs": 2, 00:16:06.829 "num_base_bdevs_discovered": 1, 00:16:06.829 "num_base_bdevs_operational": 1, 00:16:06.829 "base_bdevs_list": [ 00:16:06.829 { 00:16:06.829 "name": null, 00:16:06.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.829 "is_configured": false, 00:16:06.829 "data_offset": 0, 00:16:06.829 "data_size": 63488 00:16:06.829 }, 00:16:06.829 { 00:16:06.829 "name": "BaseBdev2", 00:16:06.829 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:06.829 "is_configured": true, 00:16:06.829 "data_offset": 2048, 00:16:06.829 "data_size": 63488 00:16:06.829 } 00:16:06.829 ] 00:16:06.829 }' 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.829 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.402 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.402 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.402 20:29:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.402 [2024-11-26 20:29:00.686787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.402 [2024-11-26 20:29:00.686952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.402 [2024-11-26 20:29:00.687005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:07.402 [2024-11-26 20:29:00.687037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.402 [2024-11-26 20:29:00.687568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.402 [2024-11-26 20:29:00.687636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.402 [2024-11-26 20:29:00.687769] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.402 [2024-11-26 20:29:00.687813] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.402 [2024-11-26 20:29:00.687853] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:07.402 [2024-11-26 20:29:00.687947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.402 [2024-11-26 20:29:00.704915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:07.402 spare 00:16:07.402 20:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.402 20:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:07.402 [2024-11-26 20:29:00.707038] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.360 "name": "raid_bdev1", 00:16:08.360 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:08.360 "strip_size_kb": 0, 00:16:08.360 "state": "online", 00:16:08.360 
"raid_level": "raid1", 00:16:08.360 "superblock": true, 00:16:08.360 "num_base_bdevs": 2, 00:16:08.360 "num_base_bdevs_discovered": 2, 00:16:08.360 "num_base_bdevs_operational": 2, 00:16:08.360 "process": { 00:16:08.360 "type": "rebuild", 00:16:08.360 "target": "spare", 00:16:08.360 "progress": { 00:16:08.360 "blocks": 20480, 00:16:08.360 "percent": 32 00:16:08.360 } 00:16:08.360 }, 00:16:08.360 "base_bdevs_list": [ 00:16:08.360 { 00:16:08.360 "name": "spare", 00:16:08.360 "uuid": "38f76ead-7af5-5e75-a094-07647d98cc2d", 00:16:08.360 "is_configured": true, 00:16:08.360 "data_offset": 2048, 00:16:08.360 "data_size": 63488 00:16:08.360 }, 00:16:08.360 { 00:16:08.360 "name": "BaseBdev2", 00:16:08.360 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:08.360 "is_configured": true, 00:16:08.360 "data_offset": 2048, 00:16:08.360 "data_size": 63488 00:16:08.360 } 00:16:08.360 ] 00:16:08.360 }' 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.360 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 [2024-11-26 20:29:01.858715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.360 [2024-11-26 20:29:01.912994] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.360 [2024-11-26 20:29:01.913083] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.360 [2024-11-26 20:29:01.913105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.360 [2024-11-26 20:29:01.913114] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.619 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.619 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.620 20:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.620 20:29:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.620 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.620 "name": "raid_bdev1", 00:16:08.620 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:08.620 "strip_size_kb": 0, 00:16:08.620 "state": "online", 00:16:08.620 "raid_level": "raid1", 00:16:08.620 "superblock": true, 00:16:08.620 "num_base_bdevs": 2, 00:16:08.620 "num_base_bdevs_discovered": 1, 00:16:08.620 "num_base_bdevs_operational": 1, 00:16:08.620 "base_bdevs_list": [ 00:16:08.620 { 00:16:08.620 "name": null, 00:16:08.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.620 "is_configured": false, 00:16:08.620 "data_offset": 0, 00:16:08.620 "data_size": 63488 00:16:08.620 }, 00:16:08.620 { 00:16:08.620 "name": "BaseBdev2", 00:16:08.620 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:08.620 "is_configured": true, 00:16:08.620 "data_offset": 2048, 00:16:08.620 "data_size": 63488 00:16:08.620 } 00:16:08.620 ] 00:16:08.620 }' 00:16:08.620 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.620 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.879 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.138 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.138 "name": "raid_bdev1", 00:16:09.138 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:09.138 "strip_size_kb": 0, 00:16:09.138 "state": "online", 00:16:09.138 "raid_level": "raid1", 00:16:09.138 "superblock": true, 00:16:09.138 "num_base_bdevs": 2, 00:16:09.138 "num_base_bdevs_discovered": 1, 00:16:09.138 "num_base_bdevs_operational": 1, 00:16:09.138 "base_bdevs_list": [ 00:16:09.138 { 00:16:09.138 "name": null, 00:16:09.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.138 "is_configured": false, 00:16:09.138 "data_offset": 0, 00:16:09.138 "data_size": 63488 00:16:09.138 }, 00:16:09.138 { 00:16:09.138 "name": "BaseBdev2", 00:16:09.138 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:09.138 "is_configured": true, 00:16:09.138 "data_offset": 2048, 00:16:09.138 "data_size": 63488 00:16:09.138 } 00:16:09.138 ] 00:16:09.138 }' 00:16:09.138 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.138 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.138 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.138 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.138 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.139 [2024-11-26 20:29:02.557070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.139 [2024-11-26 20:29:02.557225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.139 [2024-11-26 20:29:02.557297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:09.139 [2024-11-26 20:29:02.557385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.139 [2024-11-26 20:29:02.557907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.139 [2024-11-26 20:29:02.557978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.139 [2024-11-26 20:29:02.558107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:09.139 [2024-11-26 20:29:02.558155] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.139 [2024-11-26 20:29:02.558204] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:09.139 [2024-11-26 20:29:02.558256] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:09.139 BaseBdev1 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.139 20:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.076 "name": "raid_bdev1", 00:16:10.076 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:10.076 
"strip_size_kb": 0, 00:16:10.076 "state": "online", 00:16:10.076 "raid_level": "raid1", 00:16:10.076 "superblock": true, 00:16:10.076 "num_base_bdevs": 2, 00:16:10.076 "num_base_bdevs_discovered": 1, 00:16:10.076 "num_base_bdevs_operational": 1, 00:16:10.076 "base_bdevs_list": [ 00:16:10.076 { 00:16:10.076 "name": null, 00:16:10.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.076 "is_configured": false, 00:16:10.076 "data_offset": 0, 00:16:10.076 "data_size": 63488 00:16:10.076 }, 00:16:10.076 { 00:16:10.076 "name": "BaseBdev2", 00:16:10.076 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:10.076 "is_configured": true, 00:16:10.076 "data_offset": 2048, 00:16:10.076 "data_size": 63488 00:16:10.076 } 00:16:10.076 ] 00:16:10.076 }' 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.076 20:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.646 20:29:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.646 "name": "raid_bdev1", 00:16:10.646 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:10.646 "strip_size_kb": 0, 00:16:10.646 "state": "online", 00:16:10.646 "raid_level": "raid1", 00:16:10.646 "superblock": true, 00:16:10.646 "num_base_bdevs": 2, 00:16:10.646 "num_base_bdevs_discovered": 1, 00:16:10.646 "num_base_bdevs_operational": 1, 00:16:10.646 "base_bdevs_list": [ 00:16:10.646 { 00:16:10.646 "name": null, 00:16:10.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.646 "is_configured": false, 00:16:10.646 "data_offset": 0, 00:16:10.646 "data_size": 63488 00:16:10.646 }, 00:16:10.646 { 00:16:10.646 "name": "BaseBdev2", 00:16:10.646 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:10.646 "is_configured": true, 00:16:10.646 "data_offset": 2048, 00:16:10.646 "data_size": 63488 00:16:10.646 } 00:16:10.646 ] 00:16:10.646 }' 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.646 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.646 [2024-11-26 20:29:04.182773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.647 [2024-11-26 20:29:04.183022] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:10.647 [2024-11-26 20:29:04.183098] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:10.647 request: 00:16:10.647 { 00:16:10.647 "base_bdev": "BaseBdev1", 00:16:10.647 "raid_bdev": "raid_bdev1", 00:16:10.647 "method": "bdev_raid_add_base_bdev", 00:16:10.647 "req_id": 1 00:16:10.647 } 00:16:10.647 Got JSON-RPC error response 00:16:10.647 response: 00:16:10.647 { 00:16:10.647 "code": -22, 00:16:10.647 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:10.647 } 00:16:10.647 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:10.647 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:10.647 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:10.647 20:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:10.647 20:29:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:10.647 20:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.026 "name": "raid_bdev1", 00:16:12.026 "uuid": 
"53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:12.026 "strip_size_kb": 0, 00:16:12.026 "state": "online", 00:16:12.026 "raid_level": "raid1", 00:16:12.026 "superblock": true, 00:16:12.026 "num_base_bdevs": 2, 00:16:12.026 "num_base_bdevs_discovered": 1, 00:16:12.026 "num_base_bdevs_operational": 1, 00:16:12.026 "base_bdevs_list": [ 00:16:12.026 { 00:16:12.026 "name": null, 00:16:12.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.026 "is_configured": false, 00:16:12.026 "data_offset": 0, 00:16:12.026 "data_size": 63488 00:16:12.026 }, 00:16:12.026 { 00:16:12.026 "name": "BaseBdev2", 00:16:12.026 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:12.026 "is_configured": true, 00:16:12.026 "data_offset": 2048, 00:16:12.026 "data_size": 63488 00:16:12.026 } 00:16:12.026 ] 00:16:12.026 }' 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.026 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.285 "name": "raid_bdev1", 00:16:12.285 "uuid": "53a35fa5-a0af-4f12-a40f-3072b7524160", 00:16:12.285 "strip_size_kb": 0, 00:16:12.285 "state": "online", 00:16:12.285 "raid_level": "raid1", 00:16:12.285 "superblock": true, 00:16:12.285 "num_base_bdevs": 2, 00:16:12.285 "num_base_bdevs_discovered": 1, 00:16:12.285 "num_base_bdevs_operational": 1, 00:16:12.285 "base_bdevs_list": [ 00:16:12.285 { 00:16:12.285 "name": null, 00:16:12.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.285 "is_configured": false, 00:16:12.285 "data_offset": 0, 00:16:12.285 "data_size": 63488 00:16:12.285 }, 00:16:12.285 { 00:16:12.285 "name": "BaseBdev2", 00:16:12.285 "uuid": "1d77f237-39bd-5084-ba3a-6b629287fb58", 00:16:12.285 "is_configured": true, 00:16:12.285 "data_offset": 2048, 00:16:12.285 "data_size": 63488 00:16:12.285 } 00:16:12.285 ] 00:16:12.285 }' 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76093 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76093 ']' 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76093 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76093 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.285 killing process with pid 76093 00:16:12.285 Received shutdown signal, test time was about 60.000000 seconds 00:16:12.285 00:16:12.285 Latency(us) 00:16:12.285 [2024-11-26T20:29:05.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.285 [2024-11-26T20:29:05.840Z] =================================================================================================================== 00:16:12.285 [2024-11-26T20:29:05.840Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76093' 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76093 00:16:12.285 [2024-11-26 20:29:05.810087] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.285 20:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76093 00:16:12.285 [2024-11-26 20:29:05.810235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.285 [2024-11-26 20:29:05.810307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.285 [2024-11-26 20:29:05.810323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:12.853 [2024-11-26 20:29:06.145845] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:16:14.233 00:16:14.233 real 0m24.647s 00:16:14.233 user 0m29.604s 00:16:14.233 sys 0m4.293s 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.233 ************************************ 00:16:14.233 END TEST raid_rebuild_test_sb 00:16:14.233 ************************************ 00:16:14.233 20:29:07 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:14.233 20:29:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:14.233 20:29:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.233 20:29:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.233 ************************************ 00:16:14.233 START TEST raid_rebuild_test_io 00:16:14.233 ************************************ 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:14.233 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76842 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76842 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76842 ']' 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.234 20:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.234 [2024-11-26 20:29:07.534689] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:16:14.234 [2024-11-26 20:29:07.534906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:14.234 Zero copy mechanism will not be used. 
00:16:14.234 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76842 ] 00:16:14.234 [2024-11-26 20:29:07.709129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.494 [2024-11-26 20:29:07.832102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.494 [2024-11-26 20:29:08.043172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.494 [2024-11-26 20:29:08.043347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.063 BaseBdev1_malloc 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.063 [2024-11-26 20:29:08.463406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.063 [2024-11-26 20:29:08.463576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:15.063 [2024-11-26 20:29:08.463620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.063 [2024-11-26 20:29:08.463663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.063 [2024-11-26 20:29:08.465911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.063 [2024-11-26 20:29:08.466007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.063 BaseBdev1 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.063 BaseBdev2_malloc 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.063 [2024-11-26 20:29:08.522327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:15.063 [2024-11-26 20:29:08.522490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.063 [2024-11-26 20:29:08.522539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.063 [2024-11-26 20:29:08.522581] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.063 [2024-11-26 20:29:08.524902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.063 [2024-11-26 20:29:08.524985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.063 BaseBdev2 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.063 spare_malloc 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.063 spare_delay 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.063 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.321 [2024-11-26 20:29:08.618625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.321 [2024-11-26 20:29:08.618777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:16:15.321 [2024-11-26 20:29:08.618817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:15.321 [2024-11-26 20:29:08.618849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.321 [2024-11-26 20:29:08.621136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.321 [2024-11-26 20:29:08.621220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.321 spare 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.321 [2024-11-26 20:29:08.630659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.321 [2024-11-26 20:29:08.632479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.321 [2024-11-26 20:29:08.632610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:15.321 [2024-11-26 20:29:08.632644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:15.321 [2024-11-26 20:29:08.632924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:15.321 [2024-11-26 20:29:08.633120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:15.321 [2024-11-26 20:29:08.633163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:15.321 [2024-11-26 20:29:08.633355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.321 "name": "raid_bdev1", 00:16:15.321 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:15.321 
"strip_size_kb": 0, 00:16:15.321 "state": "online", 00:16:15.321 "raid_level": "raid1", 00:16:15.321 "superblock": false, 00:16:15.321 "num_base_bdevs": 2, 00:16:15.321 "num_base_bdevs_discovered": 2, 00:16:15.321 "num_base_bdevs_operational": 2, 00:16:15.321 "base_bdevs_list": [ 00:16:15.321 { 00:16:15.321 "name": "BaseBdev1", 00:16:15.321 "uuid": "462e197a-f148-5257-857b-6865318e5a88", 00:16:15.321 "is_configured": true, 00:16:15.321 "data_offset": 0, 00:16:15.321 "data_size": 65536 00:16:15.321 }, 00:16:15.321 { 00:16:15.321 "name": "BaseBdev2", 00:16:15.321 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:15.321 "is_configured": true, 00:16:15.321 "data_offset": 0, 00:16:15.321 "data_size": 65536 00:16:15.321 } 00:16:15.321 ] 00:16:15.321 }' 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.321 20:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.580 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:15.580 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.580 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.580 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:15.580 [2024-11-26 20:29:09.102296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.580 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.839 [2024-11-26 20:29:09.201719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.839 20:29:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.839 "name": "raid_bdev1", 00:16:15.839 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:15.839 "strip_size_kb": 0, 00:16:15.839 "state": "online", 00:16:15.839 "raid_level": "raid1", 00:16:15.839 "superblock": false, 00:16:15.839 "num_base_bdevs": 2, 00:16:15.839 "num_base_bdevs_discovered": 1, 00:16:15.839 "num_base_bdevs_operational": 1, 00:16:15.839 "base_bdevs_list": [ 00:16:15.839 { 00:16:15.839 "name": null, 00:16:15.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.839 "is_configured": false, 00:16:15.839 "data_offset": 0, 00:16:15.839 "data_size": 65536 00:16:15.839 }, 00:16:15.839 { 00:16:15.839 "name": "BaseBdev2", 00:16:15.839 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:15.839 "is_configured": true, 00:16:15.839 "data_offset": 0, 00:16:15.839 "data_size": 65536 00:16:15.839 } 00:16:15.839 ] 00:16:15.839 }' 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.839 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:16:15.839 [2024-11-26 20:29:09.305905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:15.839 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:15.839 Zero copy mechanism will not be used. 00:16:15.839 Running I/O for 60 seconds... 00:16:16.405 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:16.405 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.405 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.405 [2024-11-26 20:29:09.686713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.405 20:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.405 20:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:16.405 [2024-11-26 20:29:09.752650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:16.405 [2024-11-26 20:29:09.754821] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.405 [2024-11-26 20:29:09.865932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:16.405 [2024-11-26 20:29:09.866602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:16.664 [2024-11-26 20:29:09.990014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:16.664 [2024-11-26 20:29:09.990417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:16.923 159.00 IOPS, 477.00 MiB/s [2024-11-26T20:29:10.478Z] [2024-11-26 20:29:10.324786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:16.923 [2024-11-26 20:29:10.325378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:17.182 [2024-11-26 20:29:10.539574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:17.182 [2024-11-26 20:29:10.539918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.442 "name": "raid_bdev1", 00:16:17.442 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:17.442 "strip_size_kb": 0, 00:16:17.442 "state": "online", 00:16:17.442 "raid_level": "raid1", 00:16:17.442 "superblock": false, 
00:16:17.442 "num_base_bdevs": 2, 00:16:17.442 "num_base_bdevs_discovered": 2, 00:16:17.442 "num_base_bdevs_operational": 2, 00:16:17.442 "process": { 00:16:17.442 "type": "rebuild", 00:16:17.442 "target": "spare", 00:16:17.442 "progress": { 00:16:17.442 "blocks": 12288, 00:16:17.442 "percent": 18 00:16:17.442 } 00:16:17.442 }, 00:16:17.442 "base_bdevs_list": [ 00:16:17.442 { 00:16:17.442 "name": "spare", 00:16:17.442 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:17.442 "is_configured": true, 00:16:17.442 "data_offset": 0, 00:16:17.442 "data_size": 65536 00:16:17.442 }, 00:16:17.442 { 00:16:17.442 "name": "BaseBdev2", 00:16:17.442 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:17.442 "is_configured": true, 00:16:17.442 "data_offset": 0, 00:16:17.442 "data_size": 65536 00:16:17.442 } 00:16:17.442 ] 00:16:17.442 }' 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.442 [2024-11-26 20:29:10.860837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.442 20:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.442 [2024-11-26 20:29:10.905649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.442 [2024-11-26 20:29:10.971954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:17.702 [2024-11-26 20:29:11.073644] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:17.702 [2024-11-26 20:29:11.076762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.702 [2024-11-26 20:29:11.076819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.702 [2024-11-26 20:29:11.076839] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.702 [2024-11-26 20:29:11.120942] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.702 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.703 "name": "raid_bdev1", 00:16:17.703 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:17.703 "strip_size_kb": 0, 00:16:17.703 "state": "online", 00:16:17.703 "raid_level": "raid1", 00:16:17.703 "superblock": false, 00:16:17.703 "num_base_bdevs": 2, 00:16:17.703 "num_base_bdevs_discovered": 1, 00:16:17.703 "num_base_bdevs_operational": 1, 00:16:17.703 "base_bdevs_list": [ 00:16:17.703 { 00:16:17.703 "name": null, 00:16:17.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.703 "is_configured": false, 00:16:17.703 "data_offset": 0, 00:16:17.703 "data_size": 65536 00:16:17.703 }, 00:16:17.703 { 00:16:17.703 "name": "BaseBdev2", 00:16:17.703 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:17.703 "is_configured": true, 00:16:17.703 "data_offset": 0, 00:16:17.703 "data_size": 65536 00:16:17.703 } 00:16:17.703 ] 00:16:17.703 }' 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.703 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.222 128.00 IOPS, 384.00 MiB/s [2024-11-26T20:29:11.777Z] 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.222 20:29:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.222 "name": "raid_bdev1", 00:16:18.222 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:18.222 "strip_size_kb": 0, 00:16:18.222 "state": "online", 00:16:18.222 "raid_level": "raid1", 00:16:18.222 "superblock": false, 00:16:18.222 "num_base_bdevs": 2, 00:16:18.222 "num_base_bdevs_discovered": 1, 00:16:18.222 "num_base_bdevs_operational": 1, 00:16:18.222 "base_bdevs_list": [ 00:16:18.222 { 00:16:18.222 "name": null, 00:16:18.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.222 "is_configured": false, 00:16:18.222 "data_offset": 0, 00:16:18.222 "data_size": 65536 00:16:18.222 }, 00:16:18.222 { 00:16:18.222 "name": "BaseBdev2", 00:16:18.222 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:18.222 "is_configured": true, 00:16:18.222 "data_offset": 0, 00:16:18.222 "data_size": 65536 00:16:18.222 } 00:16:18.222 ] 00:16:18.222 }' 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ none == \n\o\n\e ]] 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.222 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.222 [2024-11-26 20:29:11.751051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.481 20:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.481 20:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:18.481 [2024-11-26 20:29:11.834879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:18.481 [2024-11-26 20:29:11.836983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.481 [2024-11-26 20:29:11.965791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:18.481 [2024-11-26 20:29:11.966494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:18.741 [2024-11-26 20:29:12.181931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:19.000 160.33 IOPS, 481.00 MiB/s [2024-11-26T20:29:12.555Z] [2024-11-26 20:29:12.468263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:19.259 [2024-11-26 20:29:12.685758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:16:19.259 [2024-11-26 20:29:12.686139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.519 "name": "raid_bdev1", 00:16:19.519 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:19.519 "strip_size_kb": 0, 00:16:19.519 "state": "online", 00:16:19.519 "raid_level": "raid1", 00:16:19.519 "superblock": false, 00:16:19.519 "num_base_bdevs": 2, 00:16:19.519 "num_base_bdevs_discovered": 2, 00:16:19.519 "num_base_bdevs_operational": 2, 00:16:19.519 "process": { 00:16:19.519 "type": "rebuild", 00:16:19.519 "target": "spare", 00:16:19.519 "progress": { 00:16:19.519 "blocks": 12288, 00:16:19.519 "percent": 18 00:16:19.519 } 00:16:19.519 }, 00:16:19.519 "base_bdevs_list": [ 
00:16:19.519 { 00:16:19.519 "name": "spare", 00:16:19.519 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:19.519 "is_configured": true, 00:16:19.519 "data_offset": 0, 00:16:19.519 "data_size": 65536 00:16:19.519 }, 00:16:19.519 { 00:16:19.519 "name": "BaseBdev2", 00:16:19.519 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:19.519 "is_configured": true, 00:16:19.519 "data_offset": 0, 00:16:19.519 "data_size": 65536 00:16:19.519 } 00:16:19.519 ] 00:16:19.519 }' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.519 [2024-11-26 20:29:12.923634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=425 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.519 20:29:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.519 20:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.519 20:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.519 "name": "raid_bdev1", 00:16:19.519 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:19.519 "strip_size_kb": 0, 00:16:19.519 "state": "online", 00:16:19.519 "raid_level": "raid1", 00:16:19.520 "superblock": false, 00:16:19.520 "num_base_bdevs": 2, 00:16:19.520 "num_base_bdevs_discovered": 2, 00:16:19.520 "num_base_bdevs_operational": 2, 00:16:19.520 "process": { 00:16:19.520 "type": "rebuild", 00:16:19.520 "target": "spare", 00:16:19.520 "progress": { 00:16:19.520 "blocks": 14336, 00:16:19.520 "percent": 21 00:16:19.520 } 00:16:19.520 }, 00:16:19.520 "base_bdevs_list": [ 00:16:19.520 { 00:16:19.520 "name": "spare", 00:16:19.520 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:19.520 "is_configured": true, 00:16:19.520 "data_offset": 0, 00:16:19.520 "data_size": 65536 00:16:19.520 }, 00:16:19.520 { 00:16:19.520 "name": "BaseBdev2", 00:16:19.520 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:19.520 "is_configured": true, 00:16:19.520 "data_offset": 0, 00:16:19.520 "data_size": 65536 00:16:19.520 } 00:16:19.520 ] 
00:16:19.520 }' 00:16:19.520 20:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.779 20:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.779 20:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.779 20:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.779 20:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.038 133.50 IOPS, 400.50 MiB/s [2024-11-26T20:29:13.593Z] [2024-11-26 20:29:13.396194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:20.038 [2024-11-26 20:29:13.396973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:20.605 [2024-11-26 20:29:13.861756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:20.605 [2024-11-26 20:29:14.075828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:20.605 [2024-11-26 20:29:14.076297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.605 20:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.864 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.864 "name": "raid_bdev1", 00:16:20.864 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:20.864 "strip_size_kb": 0, 00:16:20.864 "state": "online", 00:16:20.864 "raid_level": "raid1", 00:16:20.864 "superblock": false, 00:16:20.864 "num_base_bdevs": 2, 00:16:20.864 "num_base_bdevs_discovered": 2, 00:16:20.864 "num_base_bdevs_operational": 2, 00:16:20.864 "process": { 00:16:20.864 "type": "rebuild", 00:16:20.864 "target": "spare", 00:16:20.864 "progress": { 00:16:20.864 "blocks": 28672, 00:16:20.864 "percent": 43 00:16:20.864 } 00:16:20.864 }, 00:16:20.864 "base_bdevs_list": [ 00:16:20.864 { 00:16:20.864 "name": "spare", 00:16:20.864 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:20.864 "is_configured": true, 00:16:20.864 "data_offset": 0, 00:16:20.864 "data_size": 65536 00:16:20.864 }, 00:16:20.864 { 00:16:20.864 "name": "BaseBdev2", 00:16:20.864 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:20.864 "is_configured": true, 00:16:20.864 "data_offset": 0, 00:16:20.864 "data_size": 65536 00:16:20.864 } 00:16:20.864 ] 00:16:20.864 }' 00:16:20.864 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.864 20:29:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.864 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.864 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.864 20:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.121 115.60 IOPS, 346.80 MiB/s [2024-11-26T20:29:14.677Z] [2024-11-26 20:29:14.540937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:21.379 [2024-11-26 20:29:14.758440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:21.636 [2024-11-26 20:29:14.981057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:21.636 [2024-11-26 20:29:14.981449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.894 107.33 IOPS, 322.00 MiB/s [2024-11-26T20:29:15.449Z] 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.894 "name": "raid_bdev1", 00:16:21.894 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:21.894 "strip_size_kb": 0, 00:16:21.894 "state": "online", 00:16:21.894 "raid_level": "raid1", 00:16:21.894 "superblock": false, 00:16:21.894 "num_base_bdevs": 2, 00:16:21.894 "num_base_bdevs_discovered": 2, 00:16:21.894 "num_base_bdevs_operational": 2, 00:16:21.894 "process": { 00:16:21.894 "type": "rebuild", 00:16:21.894 "target": "spare", 00:16:21.894 "progress": { 00:16:21.894 "blocks": 43008, 00:16:21.894 "percent": 65 00:16:21.894 } 00:16:21.894 }, 00:16:21.894 "base_bdevs_list": [ 00:16:21.894 { 00:16:21.894 "name": "spare", 00:16:21.894 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:21.894 "is_configured": true, 00:16:21.894 "data_offset": 0, 00:16:21.894 "data_size": 65536 00:16:21.894 }, 00:16:21.894 { 00:16:21.894 "name": "BaseBdev2", 00:16:21.894 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:21.894 "is_configured": true, 00:16:21.894 "data_offset": 0, 00:16:21.894 "data_size": 65536 00:16:21.894 } 00:16:21.894 ] 00:16:21.894 }' 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:21.894 20:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.154 [2024-11-26 20:29:15.619445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:22.721 [2024-11-26 20:29:16.053641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:22.721 [2024-11-26 20:29:16.161782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:22.979 97.86 IOPS, 293.57 MiB/s [2024-11-26T20:29:16.534Z] 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.979 "name": 
"raid_bdev1", 00:16:22.979 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:22.979 "strip_size_kb": 0, 00:16:22.979 "state": "online", 00:16:22.979 "raid_level": "raid1", 00:16:22.979 "superblock": false, 00:16:22.979 "num_base_bdevs": 2, 00:16:22.979 "num_base_bdevs_discovered": 2, 00:16:22.979 "num_base_bdevs_operational": 2, 00:16:22.979 "process": { 00:16:22.979 "type": "rebuild", 00:16:22.979 "target": "spare", 00:16:22.979 "progress": { 00:16:22.979 "blocks": 61440, 00:16:22.979 "percent": 93 00:16:22.979 } 00:16:22.979 }, 00:16:22.979 "base_bdevs_list": [ 00:16:22.979 { 00:16:22.979 "name": "spare", 00:16:22.979 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:22.979 "is_configured": true, 00:16:22.979 "data_offset": 0, 00:16:22.979 "data_size": 65536 00:16:22.979 }, 00:16:22.979 { 00:16:22.979 "name": "BaseBdev2", 00:16:22.979 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:22.979 "is_configured": true, 00:16:22.979 "data_offset": 0, 00:16:22.979 "data_size": 65536 00:16:22.979 } 00:16:22.979 ] 00:16:22.979 }' 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.979 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.241 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.241 20:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.241 [2024-11-26 20:29:16.610960] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:23.241 [2024-11-26 20:29:16.710693] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:23.241 [2024-11-26 20:29:16.714425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.068 89.50 
IOPS, 268.50 MiB/s [2024-11-26T20:29:17.623Z] 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.068 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.068 "name": "raid_bdev1", 00:16:24.068 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:24.068 "strip_size_kb": 0, 00:16:24.068 "state": "online", 00:16:24.068 "raid_level": "raid1", 00:16:24.068 "superblock": false, 00:16:24.068 "num_base_bdevs": 2, 00:16:24.068 "num_base_bdevs_discovered": 2, 00:16:24.068 "num_base_bdevs_operational": 2, 00:16:24.068 "base_bdevs_list": [ 00:16:24.068 { 00:16:24.068 "name": "spare", 00:16:24.068 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:24.068 "is_configured": true, 00:16:24.068 "data_offset": 0, 00:16:24.068 "data_size": 65536 00:16:24.068 }, 00:16:24.068 { 00:16:24.068 
"name": "BaseBdev2", 00:16:24.068 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:24.068 "is_configured": true, 00:16:24.068 "data_offset": 0, 00:16:24.068 "data_size": 65536 00:16:24.068 } 00:16:24.068 ] 00:16:24.068 }' 00:16:24.328 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.328 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:24.328 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.328 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.329 "name": 
"raid_bdev1", 00:16:24.329 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:24.329 "strip_size_kb": 0, 00:16:24.329 "state": "online", 00:16:24.329 "raid_level": "raid1", 00:16:24.329 "superblock": false, 00:16:24.329 "num_base_bdevs": 2, 00:16:24.329 "num_base_bdevs_discovered": 2, 00:16:24.329 "num_base_bdevs_operational": 2, 00:16:24.329 "base_bdevs_list": [ 00:16:24.329 { 00:16:24.329 "name": "spare", 00:16:24.329 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:24.329 "is_configured": true, 00:16:24.329 "data_offset": 0, 00:16:24.329 "data_size": 65536 00:16:24.329 }, 00:16:24.329 { 00:16:24.329 "name": "BaseBdev2", 00:16:24.329 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:24.329 "is_configured": true, 00:16:24.329 "data_offset": 0, 00:16:24.329 "data_size": 65536 00:16:24.329 } 00:16:24.329 ] 00:16:24.329 }' 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.329 20:29:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.329 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.589 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.589 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.589 "name": "raid_bdev1", 00:16:24.589 "uuid": "05142d5e-913d-433d-954c-313f08c700a5", 00:16:24.589 "strip_size_kb": 0, 00:16:24.589 "state": "online", 00:16:24.589 "raid_level": "raid1", 00:16:24.589 "superblock": false, 00:16:24.589 "num_base_bdevs": 2, 00:16:24.589 "num_base_bdevs_discovered": 2, 00:16:24.589 "num_base_bdevs_operational": 2, 00:16:24.589 "base_bdevs_list": [ 00:16:24.589 { 00:16:24.589 "name": "spare", 00:16:24.589 "uuid": "eecac051-2421-50b8-8b35-b28fdff0f4f0", 00:16:24.589 "is_configured": true, 00:16:24.589 "data_offset": 0, 00:16:24.589 "data_size": 65536 00:16:24.589 }, 00:16:24.589 { 00:16:24.589 "name": "BaseBdev2", 00:16:24.589 "uuid": "695f5cad-9c46-5f74-89b9-e520dc4eaed4", 00:16:24.589 "is_configured": true, 00:16:24.589 "data_offset": 0, 00:16:24.589 "data_size": 65536 00:16:24.589 } 00:16:24.589 ] 00:16:24.589 }' 00:16:24.589 20:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:24.589 20:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.849 83.78 IOPS, 251.33 MiB/s [2024-11-26T20:29:18.404Z] 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:24.849 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.849 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.849 [2024-11-26 20:29:18.366941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.849 [2024-11-26 20:29:18.367054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.108 00:16:25.108 Latency(us) 00:16:25.109 [2024-11-26T20:29:18.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.109 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:25.109 raid_bdev1 : 9.18 83.15 249.45 0.00 0.00 16608.97 325.53 115389.15 00:16:25.109 [2024-11-26T20:29:18.664Z] =================================================================================================================== 00:16:25.109 [2024-11-26T20:29:18.664Z] Total : 83.15 249.45 0.00 0.00 16608.97 325.53 115389.15 00:16:25.109 [2024-11-26 20:29:18.492151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.109 [2024-11-26 20:29:18.492281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.109 [2024-11-26 20:29:18.492387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.109 [2024-11-26 20:29:18.492515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:25.109 { 00:16:25.109 "results": [ 00:16:25.109 { 00:16:25.109 "job": "raid_bdev1", 00:16:25.109 "core_mask": "0x1", 00:16:25.109 
"workload": "randrw", 00:16:25.109 "percentage": 50, 00:16:25.109 "status": "finished", 00:16:25.109 "queue_depth": 2, 00:16:25.109 "io_size": 3145728, 00:16:25.109 "runtime": 9.176335, 00:16:25.109 "iops": 83.1486644722539, 00:16:25.109 "mibps": 249.4459934167617, 00:16:25.109 "io_failed": 0, 00:16:25.109 "io_timeout": 0, 00:16:25.109 "avg_latency_us": 16608.96974136796, 00:16:25.109 "min_latency_us": 325.5336244541485, 00:16:25.109 "max_latency_us": 115389.14934497817 00:16:25.109 } 00:16:25.109 ], 00:16:25.109 "core_count": 1 00:16:25.109 } 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.109 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:25.369 /dev/nbd0 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.369 1+0 records in 00:16:25.369 1+0 records out 00:16:25.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466363 s, 8.8 MB/s 
00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:25.369 20:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.369 20:29:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:25.629 /dev/nbd1 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.629 1+0 records in 00:16:25.629 1+0 records out 00:16:25.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567477 s, 7.2 MB/s 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.629 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:25.893 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:25.893 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.893 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:25.893 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.893 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:25.893 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.893 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:26.155 
20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.155 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76842 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' 
-z 76842 ']' 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76842 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76842 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76842' 00:16:26.415 killing process with pid 76842 00:16:26.415 Received shutdown signal, test time was about 10.546014 seconds 00:16:26.415 00:16:26.415 Latency(us) 00:16:26.415 [2024-11-26T20:29:19.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.415 [2024-11-26T20:29:19.970Z] =================================================================================================================== 00:16:26.415 [2024-11-26T20:29:19.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76842 00:16:26.415 [2024-11-26 20:29:19.833745] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.415 20:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76842 00:16:26.674 [2024-11-26 20:29:20.088671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:28.095 00:16:28.095 real 0m13.933s 00:16:28.095 user 0m17.428s 00:16:28.095 sys 0m1.621s 00:16:28.095 ************************************ 00:16:28.095 END TEST 
raid_rebuild_test_io 00:16:28.095 ************************************ 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.095 20:29:21 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:16:28.095 20:29:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:28.095 20:29:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.095 20:29:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.095 ************************************ 00:16:28.095 START TEST raid_rebuild_test_sb_io 00:16:28.095 ************************************ 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77239 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77239 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@835 -- # '[' -z 77239 ']' 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.095 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.096 20:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.096 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:28.096 Zero copy mechanism will not be used. 00:16:28.096 [2024-11-26 20:29:21.540130] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:16:28.096 [2024-11-26 20:29:21.540267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77239 ] 00:16:28.355 [2024-11-26 20:29:21.716257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.355 [2024-11-26 20:29:21.835764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.615 [2024-11-26 20:29:22.045510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.615 [2024-11-26 20:29:22.045546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.874 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.874 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:28.874 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.874 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:28.874 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.874 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.134 BaseBdev1_malloc 00:16:29.134 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.134 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:29.134 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.134 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.134 [2024-11-26 20:29:22.444771] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:29.135 [2024-11-26 20:29:22.444921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.135 [2024-11-26 20:29:22.444971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:29.135 [2024-11-26 20:29:22.445015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.135 [2024-11-26 20:29:22.447491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.135 [2024-11-26 20:29:22.447575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:29.135 BaseBdev1 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.135 BaseBdev2_malloc 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.135 [2024-11-26 20:29:22.503489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:29.135 [2024-11-26 20:29:22.503617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:29.135 [2024-11-26 20:29:22.503667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:29.135 [2024-11-26 20:29:22.503711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.135 [2024-11-26 20:29:22.506213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.135 [2024-11-26 20:29:22.506322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:29.135 BaseBdev2 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.135 spare_malloc 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.135 spare_delay 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.135 
[2024-11-26 20:29:22.582837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:29.135 [2024-11-26 20:29:22.582901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.135 [2024-11-26 20:29:22.582922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:29.135 [2024-11-26 20:29:22.582935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.135 [2024-11-26 20:29:22.585378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.135 [2024-11-26 20:29:22.585419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:29.135 spare 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.135 [2024-11-26 20:29:22.594889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.135 [2024-11-26 20:29:22.596980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.135 [2024-11-26 20:29:22.597293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:29.135 [2024-11-26 20:29:22.597320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:29.135 [2024-11-26 20:29:22.597642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:29.135 [2024-11-26 20:29:22.597856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:29.135 [2024-11-26 
20:29:22.597869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:29.135 [2024-11-26 20:29:22.598051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.135 "name": "raid_bdev1", 00:16:29.135 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:29.135 "strip_size_kb": 0, 00:16:29.135 "state": "online", 00:16:29.135 "raid_level": "raid1", 00:16:29.135 "superblock": true, 00:16:29.135 "num_base_bdevs": 2, 00:16:29.135 "num_base_bdevs_discovered": 2, 00:16:29.135 "num_base_bdevs_operational": 2, 00:16:29.135 "base_bdevs_list": [ 00:16:29.135 { 00:16:29.135 "name": "BaseBdev1", 00:16:29.135 "uuid": "c92b11dd-b414-51fc-907f-2897e05ca1a5", 00:16:29.135 "is_configured": true, 00:16:29.135 "data_offset": 2048, 00:16:29.135 "data_size": 63488 00:16:29.135 }, 00:16:29.135 { 00:16:29.135 "name": "BaseBdev2", 00:16:29.135 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:29.135 "is_configured": true, 00:16:29.135 "data_offset": 2048, 00:16:29.135 "data_size": 63488 00:16:29.135 } 00:16:29.135 ] 00:16:29.135 }' 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.135 20:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 [2024-11-26 20:29:23.042442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 [2024-11-26 20:29:23.102005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.704 "name": "raid_bdev1", 00:16:29.704 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:29.704 "strip_size_kb": 0, 00:16:29.704 "state": "online", 00:16:29.704 "raid_level": "raid1", 00:16:29.704 "superblock": true, 00:16:29.704 "num_base_bdevs": 2, 00:16:29.704 "num_base_bdevs_discovered": 1, 00:16:29.704 "num_base_bdevs_operational": 1, 00:16:29.704 "base_bdevs_list": [ 00:16:29.704 { 00:16:29.704 "name": null, 00:16:29.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.704 "is_configured": false, 00:16:29.704 "data_offset": 0, 00:16:29.704 "data_size": 63488 00:16:29.704 }, 00:16:29.704 { 00:16:29.704 "name": "BaseBdev2", 00:16:29.704 "uuid": 
"8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:29.704 "is_configured": true, 00:16:29.704 "data_offset": 2048, 00:16:29.704 "data_size": 63488 00:16:29.704 } 00:16:29.704 ] 00:16:29.704 }' 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.704 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 [2024-11-26 20:29:23.218552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:29.704 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:29.704 Zero copy mechanism will not be used. 00:16:29.704 Running I/O for 60 seconds... 00:16:29.964 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.964 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.964 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.222 [2024-11-26 20:29:23.521431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.222 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.222 20:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:30.223 [2024-11-26 20:29:23.583740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:30.223 [2024-11-26 20:29:23.585879] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.223 [2024-11-26 20:29:23.687500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:30.223 [2024-11-26 20:29:23.688211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:30.481 [2024-11-26 20:29:23.898398] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:30.481 [2024-11-26 20:29:23.898860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:30.739 170.00 IOPS, 510.00 MiB/s [2024-11-26T20:29:24.294Z] [2024-11-26 20:29:24.231550] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:30.739 [2024-11-26 20:29:24.232121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:30.997 [2024-11-26 20:29:24.494003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.255 "name": "raid_bdev1", 00:16:31.255 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:31.255 "strip_size_kb": 0, 00:16:31.255 "state": "online", 00:16:31.255 "raid_level": "raid1", 00:16:31.255 "superblock": true, 00:16:31.255 "num_base_bdevs": 2, 00:16:31.255 "num_base_bdevs_discovered": 2, 00:16:31.255 "num_base_bdevs_operational": 2, 00:16:31.255 "process": { 00:16:31.255 "type": "rebuild", 00:16:31.255 "target": "spare", 00:16:31.255 "progress": { 00:16:31.255 "blocks": 10240, 00:16:31.255 "percent": 16 00:16:31.255 } 00:16:31.255 }, 00:16:31.255 "base_bdevs_list": [ 00:16:31.255 { 00:16:31.255 "name": "spare", 00:16:31.255 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:31.255 "is_configured": true, 00:16:31.255 "data_offset": 2048, 00:16:31.255 "data_size": 63488 00:16:31.255 }, 00:16:31.255 { 00:16:31.255 "name": "BaseBdev2", 00:16:31.255 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:31.255 "is_configured": true, 00:16:31.255 "data_offset": 2048, 00:16:31.255 "data_size": 63488 00:16:31.255 } 00:16:31.255 ] 00:16:31.255 }' 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.255 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.255 [2024-11-26 20:29:24.725902] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.515 [2024-11-26 20:29:24.855060] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.515 [2024-11-26 20:29:24.870817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.515 [2024-11-26 20:29:24.870882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.515 [2024-11-26 20:29:24.870898] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.515 [2024-11-26 20:29:24.922320] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.515 20:29:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.515 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.515 "name": "raid_bdev1", 00:16:31.515 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:31.515 "strip_size_kb": 0, 00:16:31.515 "state": "online", 00:16:31.515 "raid_level": "raid1", 00:16:31.515 "superblock": true, 00:16:31.516 "num_base_bdevs": 2, 00:16:31.516 "num_base_bdevs_discovered": 1, 00:16:31.516 "num_base_bdevs_operational": 1, 00:16:31.516 "base_bdevs_list": [ 00:16:31.516 { 00:16:31.516 "name": null, 00:16:31.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.516 "is_configured": false, 00:16:31.516 "data_offset": 0, 00:16:31.516 "data_size": 63488 00:16:31.516 }, 00:16:31.516 { 00:16:31.516 "name": "BaseBdev2", 00:16:31.516 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:31.516 "is_configured": true, 00:16:31.516 "data_offset": 2048, 00:16:31.516 "data_size": 63488 00:16:31.516 } 00:16:31.516 ] 00:16:31.516 }' 00:16:31.516 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.516 20:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.035 146.50 IOPS, 439.50 MiB/s [2024-11-26T20:29:25.590Z] 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.035 "name": "raid_bdev1", 00:16:32.035 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:32.035 "strip_size_kb": 0, 00:16:32.035 "state": "online", 00:16:32.035 "raid_level": "raid1", 00:16:32.035 "superblock": true, 00:16:32.035 "num_base_bdevs": 2, 00:16:32.035 "num_base_bdevs_discovered": 1, 00:16:32.035 "num_base_bdevs_operational": 1, 00:16:32.035 "base_bdevs_list": [ 00:16:32.035 { 00:16:32.035 "name": null, 00:16:32.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.035 "is_configured": false, 00:16:32.035 "data_offset": 0, 00:16:32.035 "data_size": 63488 00:16:32.035 }, 00:16:32.035 { 00:16:32.035 "name": "BaseBdev2", 00:16:32.035 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:32.035 "is_configured": true, 00:16:32.035 "data_offset": 2048, 00:16:32.035 "data_size": 63488 00:16:32.035 } 00:16:32.035 ] 00:16:32.035 }' 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.035 [2024-11-26 20:29:25.490675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.035 20:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:32.035 [2024-11-26 20:29:25.543761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:32.035 [2024-11-26 20:29:25.545931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.302 [2024-11-26 20:29:25.676041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:32.302 [2024-11-26 20:29:25.676696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:32.576 [2024-11-26 20:29:25.915403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:32.576 [2024-11-26 20:29:25.915743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:32.835 142.00 IOPS, 426.00 MiB/s [2024-11-26T20:29:26.390Z] [2024-11-26 20:29:26.252110] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:33.095 [2024-11-26 20:29:26.462808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.095 "name": "raid_bdev1", 00:16:33.095 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:33.095 "strip_size_kb": 0, 00:16:33.095 "state": "online", 00:16:33.095 "raid_level": "raid1", 00:16:33.095 "superblock": true, 00:16:33.095 "num_base_bdevs": 2, 00:16:33.095 "num_base_bdevs_discovered": 2, 00:16:33.095 "num_base_bdevs_operational": 2, 00:16:33.095 "process": { 00:16:33.095 "type": "rebuild", 00:16:33.095 "target": "spare", 00:16:33.095 "progress": 
{ 00:16:33.095 "blocks": 10240, 00:16:33.095 "percent": 16 00:16:33.095 } 00:16:33.095 }, 00:16:33.095 "base_bdevs_list": [ 00:16:33.095 { 00:16:33.095 "name": "spare", 00:16:33.095 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:33.095 "is_configured": true, 00:16:33.095 "data_offset": 2048, 00:16:33.095 "data_size": 63488 00:16:33.095 }, 00:16:33.095 { 00:16:33.095 "name": "BaseBdev2", 00:16:33.095 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:33.095 "is_configured": true, 00:16:33.095 "data_offset": 2048, 00:16:33.095 "data_size": 63488 00:16:33.095 } 00:16:33.095 ] 00:16:33.095 }' 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.095 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:33.355 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=439 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.355 [2024-11-26 20:29:26.682950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.355 [2024-11-26 20:29:26.683535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.355 "name": "raid_bdev1", 00:16:33.355 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:33.355 "strip_size_kb": 0, 00:16:33.355 "state": "online", 00:16:33.355 "raid_level": "raid1", 00:16:33.355 "superblock": true, 00:16:33.355 "num_base_bdevs": 2, 00:16:33.355 "num_base_bdevs_discovered": 2, 00:16:33.355 "num_base_bdevs_operational": 2, 00:16:33.355 "process": { 00:16:33.355 "type": "rebuild", 00:16:33.355 "target": "spare", 00:16:33.355 "progress": { 
00:16:33.355 "blocks": 14336, 00:16:33.355 "percent": 22 00:16:33.355 } 00:16:33.355 }, 00:16:33.355 "base_bdevs_list": [ 00:16:33.355 { 00:16:33.355 "name": "spare", 00:16:33.355 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:33.355 "is_configured": true, 00:16:33.355 "data_offset": 2048, 00:16:33.355 "data_size": 63488 00:16:33.355 }, 00:16:33.355 { 00:16:33.355 "name": "BaseBdev2", 00:16:33.355 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:33.355 "is_configured": true, 00:16:33.355 "data_offset": 2048, 00:16:33.355 "data_size": 63488 00:16:33.355 } 00:16:33.355 ] 00:16:33.355 }' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.355 20:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.355 [2024-11-26 20:29:26.893227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:33.355 [2024-11-26 20:29:26.893697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:33.923 122.75 IOPS, 368.25 MiB/s [2024-11-26T20:29:27.478Z] [2024-11-26 20:29:27.220894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:33.923 [2024-11-26 20:29:27.329323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:34.183 [2024-11-26 20:29:27.663110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
26624 offset_begin: 24576 offset_end: 30720 00:16:34.441 [2024-11-26 20:29:27.782471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.441 "name": "raid_bdev1", 00:16:34.441 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:34.441 "strip_size_kb": 0, 00:16:34.441 "state": "online", 00:16:34.441 "raid_level": "raid1", 00:16:34.441 "superblock": true, 00:16:34.441 "num_base_bdevs": 2, 00:16:34.441 "num_base_bdevs_discovered": 2, 00:16:34.441 "num_base_bdevs_operational": 2, 00:16:34.441 "process": { 00:16:34.441 "type": "rebuild", 00:16:34.441 
"target": "spare", 00:16:34.441 "progress": { 00:16:34.441 "blocks": 28672, 00:16:34.441 "percent": 45 00:16:34.441 } 00:16:34.441 }, 00:16:34.441 "base_bdevs_list": [ 00:16:34.441 { 00:16:34.441 "name": "spare", 00:16:34.441 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:34.441 "is_configured": true, 00:16:34.441 "data_offset": 2048, 00:16:34.441 "data_size": 63488 00:16:34.441 }, 00:16:34.441 { 00:16:34.441 "name": "BaseBdev2", 00:16:34.441 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:34.441 "is_configured": true, 00:16:34.441 "data_offset": 2048, 00:16:34.441 "data_size": 63488 00:16:34.441 } 00:16:34.441 ] 00:16:34.441 }' 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.441 20:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.960 112.40 IOPS, 337.20 MiB/s [2024-11-26T20:29:28.515Z] [2024-11-26 20:29:28.322084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:34.960 [2024-11-26 20:29:28.452203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.529 20:29:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.529 20:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.529 20:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.529 "name": "raid_bdev1", 00:16:35.529 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:35.529 "strip_size_kb": 0, 00:16:35.529 "state": "online", 00:16:35.529 "raid_level": "raid1", 00:16:35.529 "superblock": true, 00:16:35.529 "num_base_bdevs": 2, 00:16:35.529 "num_base_bdevs_discovered": 2, 00:16:35.529 "num_base_bdevs_operational": 2, 00:16:35.529 "process": { 00:16:35.529 "type": "rebuild", 00:16:35.529 "target": "spare", 00:16:35.529 "progress": { 00:16:35.529 "blocks": 47104, 00:16:35.529 "percent": 74 00:16:35.529 } 00:16:35.529 }, 00:16:35.529 "base_bdevs_list": [ 00:16:35.529 { 00:16:35.529 "name": "spare", 00:16:35.529 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:35.529 "is_configured": true, 00:16:35.529 "data_offset": 2048, 00:16:35.529 "data_size": 63488 00:16:35.529 }, 00:16:35.529 { 00:16:35.529 "name": "BaseBdev2", 00:16:35.529 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:35.529 "is_configured": true, 00:16:35.529 "data_offset": 2048, 00:16:35.529 "data_size": 
63488 00:16:35.529 } 00:16:35.529 ] 00:16:35.529 }' 00:16:35.529 20:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.529 20:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.529 20:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.789 20:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.789 20:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.357 101.50 IOPS, 304.50 MiB/s [2024-11-26T20:29:29.912Z] [2024-11-26 20:29:29.785057] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:36.357 [2024-11-26 20:29:29.884854] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:36.357 [2024-11-26 20:29:29.887434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.616 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.616 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.616 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.616 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.617 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.617 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.617 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.617 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.617 
20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.617 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.617 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.876 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.876 "name": "raid_bdev1", 00:16:36.876 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:36.876 "strip_size_kb": 0, 00:16:36.876 "state": "online", 00:16:36.876 "raid_level": "raid1", 00:16:36.876 "superblock": true, 00:16:36.876 "num_base_bdevs": 2, 00:16:36.876 "num_base_bdevs_discovered": 2, 00:16:36.876 "num_base_bdevs_operational": 2, 00:16:36.876 "base_bdevs_list": [ 00:16:36.876 { 00:16:36.876 "name": "spare", 00:16:36.876 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:36.876 "is_configured": true, 00:16:36.876 "data_offset": 2048, 00:16:36.877 "data_size": 63488 00:16:36.877 }, 00:16:36.877 { 00:16:36.877 "name": "BaseBdev2", 00:16:36.877 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:36.877 "is_configured": true, 00:16:36.877 "data_offset": 2048, 00:16:36.877 "data_size": 63488 00:16:36.877 } 00:16:36.877 ] 00:16:36.877 }' 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.877 91.29 IOPS, 273.86 MiB/s [2024-11-26T20:29:30.432Z] 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.877 "name": "raid_bdev1", 00:16:36.877 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:36.877 "strip_size_kb": 0, 00:16:36.877 "state": "online", 00:16:36.877 "raid_level": "raid1", 00:16:36.877 "superblock": true, 00:16:36.877 "num_base_bdevs": 2, 00:16:36.877 "num_base_bdevs_discovered": 2, 00:16:36.877 "num_base_bdevs_operational": 2, 00:16:36.877 "base_bdevs_list": [ 00:16:36.877 { 00:16:36.877 "name": "spare", 00:16:36.877 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:36.877 "is_configured": true, 00:16:36.877 "data_offset": 2048, 00:16:36.877 "data_size": 63488 00:16:36.877 }, 00:16:36.877 { 00:16:36.877 "name": "BaseBdev2", 00:16:36.877 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:36.877 "is_configured": true, 00:16:36.877 "data_offset": 2048, 00:16:36.877 "data_size": 63488 00:16:36.877 } 00:16:36.877 ] 00:16:36.877 }' 
00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.877 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.877 20:29:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.195 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.195 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.195 "name": "raid_bdev1", 00:16:37.195 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:37.195 "strip_size_kb": 0, 00:16:37.195 "state": "online", 00:16:37.195 "raid_level": "raid1", 00:16:37.195 "superblock": true, 00:16:37.195 "num_base_bdevs": 2, 00:16:37.195 "num_base_bdevs_discovered": 2, 00:16:37.195 "num_base_bdevs_operational": 2, 00:16:37.195 "base_bdevs_list": [ 00:16:37.195 { 00:16:37.195 "name": "spare", 00:16:37.195 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:37.195 "is_configured": true, 00:16:37.195 "data_offset": 2048, 00:16:37.195 "data_size": 63488 00:16:37.195 }, 00:16:37.195 { 00:16:37.195 "name": "BaseBdev2", 00:16:37.195 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:37.195 "is_configured": true, 00:16:37.195 "data_offset": 2048, 00:16:37.195 "data_size": 63488 00:16:37.195 } 00:16:37.195 ] 00:16:37.195 }' 00:16:37.195 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.195 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.454 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.454 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.454 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.454 [2024-11-26 20:29:30.861308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.454 [2024-11-26 20:29:30.861409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.454 00:16:37.454 Latency(us) 00:16:37.454 
[2024-11-26T20:29:31.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.454 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:37.454 raid_bdev1 : 7.76 85.23 255.70 0.00 0.00 15075.24 316.59 116304.94 00:16:37.454 [2024-11-26T20:29:31.009Z] =================================================================================================================== 00:16:37.454 [2024-11-26T20:29:31.009Z] Total : 85.23 255.70 0.00 0.00 15075.24 316.59 116304.94 00:16:37.454 [2024-11-26 20:29:30.986850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.454 { 00:16:37.454 "results": [ 00:16:37.454 { 00:16:37.454 "job": "raid_bdev1", 00:16:37.454 "core_mask": "0x1", 00:16:37.454 "workload": "randrw", 00:16:37.454 "percentage": 50, 00:16:37.454 "status": "finished", 00:16:37.454 "queue_depth": 2, 00:16:37.454 "io_size": 3145728, 00:16:37.454 "runtime": 7.755041, 00:16:37.454 "iops": 85.23488141455346, 00:16:37.454 "mibps": 255.70464424366037, 00:16:37.454 "io_failed": 0, 00:16:37.454 "io_timeout": 0, 00:16:37.454 "avg_latency_us": 15075.235875245262, 00:16:37.454 "min_latency_us": 316.5903930131004, 00:16:37.454 "max_latency_us": 116304.93624454149 00:16:37.454 } 00:16:37.454 ], 00:16:37.454 "core_count": 1 00:16:37.454 } 00:16:37.454 [2024-11-26 20:29:30.987009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.454 [2024-11-26 20:29:30.987126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.454 [2024-11-26 20:29:30.987141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:37.454 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.454 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.454 
20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.454 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.454 20:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:37.454 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.714 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:37.973 /dev/nbd0 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:37.973 
20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.973 1+0 records in 00:16:37.973 1+0 records out 00:16:37.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587609 s, 7.0 MB/s 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.973 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:38.233 /dev/nbd1 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:38.233 1+0 records in 00:16:38.233 1+0 records out 00:16:38.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267347 s, 15.3 MB/s 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:38.233 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:38.493 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:38.493 
20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.493 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:38.493 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.493 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:38.493 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.493 20:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.493 
20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.493 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.062 
20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.062 [2024-11-26 20:29:32.370760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:39.062 [2024-11-26 20:29:32.370907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.062 [2024-11-26 20:29:32.370972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:39.062 [2024-11-26 20:29:32.371018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.062 [2024-11-26 20:29:32.373642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.062 [2024-11-26 20:29:32.373726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:39.062 [2024-11-26 20:29:32.373885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:39.062 [2024-11-26 20:29:32.373977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.062 [2024-11-26 20:29:32.374197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.062 spare 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.062 [2024-11-26 20:29:32.474195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:39.062 [2024-11-26 20:29:32.474354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:39.062 [2024-11-26 20:29:32.474787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002b0d0 00:16:39.062 [2024-11-26 20:29:32.475110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:39.062 [2024-11-26 20:29:32.475171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:39.062 [2024-11-26 20:29:32.475451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.062 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.062 "name": "raid_bdev1", 00:16:39.062 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:39.062 "strip_size_kb": 0, 00:16:39.062 "state": "online", 00:16:39.062 "raid_level": "raid1", 00:16:39.062 "superblock": true, 00:16:39.062 "num_base_bdevs": 2, 00:16:39.062 "num_base_bdevs_discovered": 2, 00:16:39.062 "num_base_bdevs_operational": 2, 00:16:39.062 "base_bdevs_list": [ 00:16:39.062 { 00:16:39.063 "name": "spare", 00:16:39.063 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:39.063 "is_configured": true, 00:16:39.063 "data_offset": 2048, 00:16:39.063 "data_size": 63488 00:16:39.063 }, 00:16:39.063 { 00:16:39.063 "name": "BaseBdev2", 00:16:39.063 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:39.063 "is_configured": true, 00:16:39.063 "data_offset": 2048, 00:16:39.063 "data_size": 63488 00:16:39.063 } 00:16:39.063 ] 00:16:39.063 }' 00:16:39.063 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.063 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.631 "name": "raid_bdev1", 00:16:39.631 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:39.631 "strip_size_kb": 0, 00:16:39.631 "state": "online", 00:16:39.631 "raid_level": "raid1", 00:16:39.631 "superblock": true, 00:16:39.631 "num_base_bdevs": 2, 00:16:39.631 "num_base_bdevs_discovered": 2, 00:16:39.631 "num_base_bdevs_operational": 2, 00:16:39.631 "base_bdevs_list": [ 00:16:39.631 { 00:16:39.631 "name": "spare", 00:16:39.631 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:39.631 "is_configured": true, 00:16:39.631 "data_offset": 2048, 00:16:39.631 "data_size": 63488 00:16:39.631 }, 00:16:39.631 { 00:16:39.631 "name": "BaseBdev2", 00:16:39.631 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:39.631 "is_configured": true, 00:16:39.631 "data_offset": 2048, 00:16:39.631 "data_size": 63488 00:16:39.631 } 00:16:39.631 ] 00:16:39.631 }' 00:16:39.631 20:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.631 [2024-11-26 20:29:33.114415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.631 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.632 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.632 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.632 "name": "raid_bdev1", 00:16:39.632 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:39.632 "strip_size_kb": 0, 00:16:39.632 "state": "online", 00:16:39.632 "raid_level": "raid1", 00:16:39.632 "superblock": true, 00:16:39.632 "num_base_bdevs": 2, 00:16:39.632 "num_base_bdevs_discovered": 1, 00:16:39.632 "num_base_bdevs_operational": 1, 00:16:39.632 "base_bdevs_list": [ 00:16:39.632 { 00:16:39.632 "name": null, 00:16:39.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.632 "is_configured": false, 00:16:39.632 "data_offset": 0, 00:16:39.632 "data_size": 63488 00:16:39.632 }, 00:16:39.632 { 00:16:39.632 "name": "BaseBdev2", 00:16:39.632 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:39.632 "is_configured": true, 00:16:39.632 "data_offset": 2048, 00:16:39.632 "data_size": 63488 00:16:39.632 } 00:16:39.632 ] 00:16:39.632 }' 00:16:39.632 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:39.632 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.199 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:40.199 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.199 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.199 [2024-11-26 20:29:33.517822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.199 [2024-11-26 20:29:33.518109] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:40.199 [2024-11-26 20:29:33.518185] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:40.199 [2024-11-26 20:29:33.518275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.199 [2024-11-26 20:29:33.537445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:40.199 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.199 20:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:40.199 [2024-11-26 20:29:33.539817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.148 20:29:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.148 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.148 "name": "raid_bdev1", 00:16:41.148 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:41.148 "strip_size_kb": 0, 00:16:41.148 "state": "online", 00:16:41.148 "raid_level": "raid1", 00:16:41.148 "superblock": true, 00:16:41.148 "num_base_bdevs": 2, 00:16:41.148 "num_base_bdevs_discovered": 2, 00:16:41.148 "num_base_bdevs_operational": 2, 00:16:41.148 "process": { 00:16:41.148 "type": "rebuild", 00:16:41.148 "target": "spare", 00:16:41.148 "progress": { 00:16:41.148 "blocks": 20480, 00:16:41.148 "percent": 32 00:16:41.148 } 00:16:41.148 }, 00:16:41.148 "base_bdevs_list": [ 00:16:41.148 { 00:16:41.148 "name": "spare", 00:16:41.148 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 2048, 00:16:41.149 "data_size": 63488 00:16:41.149 }, 00:16:41.149 { 00:16:41.149 "name": "BaseBdev2", 00:16:41.149 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:41.149 "is_configured": true, 00:16:41.149 "data_offset": 2048, 00:16:41.149 "data_size": 63488 00:16:41.149 } 00:16:41.149 ] 00:16:41.149 }' 00:16:41.149 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.149 20:29:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.149 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.149 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.149 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.149 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.149 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.407 [2024-11-26 20:29:34.703144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.407 [2024-11-26 20:29:34.746146] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:41.407 [2024-11-26 20:29:34.746218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.407 [2024-11-26 20:29:34.746250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.407 [2024-11-26 20:29:34.746260] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.407 20:29:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.407 "name": "raid_bdev1", 00:16:41.407 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:41.407 "strip_size_kb": 0, 00:16:41.407 "state": "online", 00:16:41.407 "raid_level": "raid1", 00:16:41.407 "superblock": true, 00:16:41.407 "num_base_bdevs": 2, 00:16:41.407 "num_base_bdevs_discovered": 1, 00:16:41.407 "num_base_bdevs_operational": 1, 00:16:41.407 "base_bdevs_list": [ 00:16:41.407 { 00:16:41.407 "name": null, 00:16:41.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.407 "is_configured": false, 00:16:41.407 "data_offset": 0, 00:16:41.407 "data_size": 63488 00:16:41.407 }, 00:16:41.407 { 00:16:41.407 "name": "BaseBdev2", 00:16:41.407 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:41.407 "is_configured": true, 00:16:41.407 "data_offset": 2048, 00:16:41.407 
"data_size": 63488 00:16:41.407 } 00:16:41.407 ] 00:16:41.407 }' 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.407 20:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.974 20:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.974 20:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.974 20:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.974 [2024-11-26 20:29:35.288735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.974 [2024-11-26 20:29:35.288918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.974 [2024-11-26 20:29:35.289044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:41.974 [2024-11-26 20:29:35.289084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.974 [2024-11-26 20:29:35.289702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.974 [2024-11-26 20:29:35.289772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.974 [2024-11-26 20:29:35.289928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.974 [2024-11-26 20:29:35.289975] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.974 [2024-11-26 20:29:35.290031] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:41.974 [2024-11-26 20:29:35.290094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.974 [2024-11-26 20:29:35.309997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:41.974 spare 00:16:41.974 20:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.974 20:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:41.974 [2024-11-26 20:29:35.312305] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.907 "name": "raid_bdev1", 00:16:42.907 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:42.907 "strip_size_kb": 0, 00:16:42.907 
"state": "online", 00:16:42.907 "raid_level": "raid1", 00:16:42.907 "superblock": true, 00:16:42.907 "num_base_bdevs": 2, 00:16:42.907 "num_base_bdevs_discovered": 2, 00:16:42.907 "num_base_bdevs_operational": 2, 00:16:42.907 "process": { 00:16:42.907 "type": "rebuild", 00:16:42.907 "target": "spare", 00:16:42.907 "progress": { 00:16:42.907 "blocks": 20480, 00:16:42.907 "percent": 32 00:16:42.907 } 00:16:42.907 }, 00:16:42.907 "base_bdevs_list": [ 00:16:42.907 { 00:16:42.907 "name": "spare", 00:16:42.907 "uuid": "8d291388-3d11-5318-b005-f8fa7ecf5c12", 00:16:42.907 "is_configured": true, 00:16:42.907 "data_offset": 2048, 00:16:42.907 "data_size": 63488 00:16:42.907 }, 00:16:42.907 { 00:16:42.907 "name": "BaseBdev2", 00:16:42.907 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:42.907 "is_configured": true, 00:16:42.907 "data_offset": 2048, 00:16:42.907 "data_size": 63488 00:16:42.907 } 00:16:42.907 ] 00:16:42.907 }' 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.907 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.165 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.166 [2024-11-26 20:29:36.483583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.166 [2024-11-26 20:29:36.518528] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:43.166 [2024-11-26 20:29:36.518702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.166 [2024-11-26 20:29:36.518749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.166 [2024-11-26 20:29:36.518778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.166 20:29:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.166 "name": "raid_bdev1", 00:16:43.166 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:43.166 "strip_size_kb": 0, 00:16:43.166 "state": "online", 00:16:43.166 "raid_level": "raid1", 00:16:43.166 "superblock": true, 00:16:43.166 "num_base_bdevs": 2, 00:16:43.166 "num_base_bdevs_discovered": 1, 00:16:43.166 "num_base_bdevs_operational": 1, 00:16:43.166 "base_bdevs_list": [ 00:16:43.166 { 00:16:43.166 "name": null, 00:16:43.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.166 "is_configured": false, 00:16:43.166 "data_offset": 0, 00:16:43.166 "data_size": 63488 00:16:43.166 }, 00:16:43.166 { 00:16:43.166 "name": "BaseBdev2", 00:16:43.166 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:43.166 "is_configured": true, 00:16:43.166 "data_offset": 2048, 00:16:43.166 "data_size": 63488 00:16:43.166 } 00:16:43.166 ] 00:16:43.166 }' 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.166 20:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.732 "name": "raid_bdev1", 00:16:43.732 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:43.732 "strip_size_kb": 0, 00:16:43.732 "state": "online", 00:16:43.732 "raid_level": "raid1", 00:16:43.732 "superblock": true, 00:16:43.732 "num_base_bdevs": 2, 00:16:43.732 "num_base_bdevs_discovered": 1, 00:16:43.732 "num_base_bdevs_operational": 1, 00:16:43.732 "base_bdevs_list": [ 00:16:43.732 { 00:16:43.732 "name": null, 00:16:43.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.732 "is_configured": false, 00:16:43.732 "data_offset": 0, 00:16:43.732 "data_size": 63488 00:16:43.732 }, 00:16:43.732 { 00:16:43.732 "name": "BaseBdev2", 00:16:43.732 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:43.732 "is_configured": true, 00:16:43.732 "data_offset": 2048, 00:16:43.732 "data_size": 63488 00:16:43.732 } 00:16:43.732 ] 00:16:43.732 }' 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.732 [2024-11-26 20:29:37.209803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.732 [2024-11-26 20:29:37.209935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.732 [2024-11-26 20:29:37.209995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:43.732 [2024-11-26 20:29:37.210041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.732 [2024-11-26 20:29:37.210618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.732 [2024-11-26 20:29:37.210694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.732 [2024-11-26 20:29:37.210826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:43.732 [2024-11-26 20:29:37.210879] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.732 [2024-11-26 20:29:37.210928] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:43.732 [2024-11-26 20:29:37.210972] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:43.732 BaseBdev1 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.732 20:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:44.666 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.666 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.925 "name": "raid_bdev1", 00:16:44.925 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:44.925 "strip_size_kb": 0, 00:16:44.925 "state": "online", 00:16:44.925 "raid_level": "raid1", 00:16:44.925 "superblock": true, 00:16:44.925 "num_base_bdevs": 2, 00:16:44.925 "num_base_bdevs_discovered": 1, 00:16:44.925 "num_base_bdevs_operational": 1, 00:16:44.925 "base_bdevs_list": [ 00:16:44.925 { 00:16:44.925 "name": null, 00:16:44.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.925 "is_configured": false, 00:16:44.925 "data_offset": 0, 00:16:44.925 "data_size": 63488 00:16:44.925 }, 00:16:44.925 { 00:16:44.925 "name": "BaseBdev2", 00:16:44.925 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:44.925 "is_configured": true, 00:16:44.925 "data_offset": 2048, 00:16:44.925 "data_size": 63488 00:16:44.925 } 00:16:44.925 ] 00:16:44.925 }' 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.925 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.209 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.209 "name": "raid_bdev1", 00:16:45.209 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:45.209 "strip_size_kb": 0, 00:16:45.209 "state": "online", 00:16:45.209 "raid_level": "raid1", 00:16:45.209 "superblock": true, 00:16:45.209 "num_base_bdevs": 2, 00:16:45.209 "num_base_bdevs_discovered": 1, 00:16:45.209 "num_base_bdevs_operational": 1, 00:16:45.209 "base_bdevs_list": [ 00:16:45.209 { 00:16:45.209 "name": null, 00:16:45.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.209 "is_configured": false, 00:16:45.209 "data_offset": 0, 00:16:45.209 "data_size": 63488 00:16:45.209 }, 00:16:45.209 { 00:16:45.209 "name": "BaseBdev2", 00:16:45.209 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:45.209 "is_configured": true, 00:16:45.209 "data_offset": 2048, 00:16:45.209 "data_size": 63488 00:16:45.209 } 00:16:45.209 ] 00:16:45.209 }' 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.468 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.468 [2024-11-26 20:29:38.863479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.468 [2024-11-26 20:29:38.863663] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.468 [2024-11-26 20:29:38.863678] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:45.468 request: 00:16:45.468 { 00:16:45.468 "base_bdev": "BaseBdev1", 00:16:45.468 "raid_bdev": "raid_bdev1", 00:16:45.468 "method": "bdev_raid_add_base_bdev", 00:16:45.468 "req_id": 1 00:16:45.468 } 00:16:45.468 Got JSON-RPC error response 00:16:45.468 response: 00:16:45.468 { 00:16:45.469 "code": -22, 00:16:45.469 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:45.469 } 00:16:45.469 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:16:45.469 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:45.469 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:45.469 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:45.469 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:45.469 20:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.406 "name": "raid_bdev1", 00:16:46.406 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:46.406 "strip_size_kb": 0, 00:16:46.406 "state": "online", 00:16:46.406 "raid_level": "raid1", 00:16:46.406 "superblock": true, 00:16:46.406 "num_base_bdevs": 2, 00:16:46.406 "num_base_bdevs_discovered": 1, 00:16:46.406 "num_base_bdevs_operational": 1, 00:16:46.406 "base_bdevs_list": [ 00:16:46.406 { 00:16:46.406 "name": null, 00:16:46.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.406 "is_configured": false, 00:16:46.406 "data_offset": 0, 00:16:46.406 "data_size": 63488 00:16:46.406 }, 00:16:46.406 { 00:16:46.406 "name": "BaseBdev2", 00:16:46.406 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:46.406 "is_configured": true, 00:16:46.406 "data_offset": 2048, 00:16:46.406 "data_size": 63488 00:16:46.406 } 00:16:46.406 ] 00:16:46.406 }' 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.406 20:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.975 20:29:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.975 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.975 "name": "raid_bdev1", 00:16:46.975 "uuid": "d6893550-1bfc-4a2c-925f-be0b96fd8576", 00:16:46.975 "strip_size_kb": 0, 00:16:46.975 "state": "online", 00:16:46.975 "raid_level": "raid1", 00:16:46.975 "superblock": true, 00:16:46.975 "num_base_bdevs": 2, 00:16:46.975 "num_base_bdevs_discovered": 1, 00:16:46.975 "num_base_bdevs_operational": 1, 00:16:46.975 "base_bdevs_list": [ 00:16:46.975 { 00:16:46.975 "name": null, 00:16:46.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.975 "is_configured": false, 00:16:46.975 "data_offset": 0, 00:16:46.975 "data_size": 63488 00:16:46.976 }, 00:16:46.976 { 00:16:46.976 "name": "BaseBdev2", 00:16:46.976 "uuid": "8d03e9e8-b892-53f5-8b64-8f3ff258b99a", 00:16:46.976 "is_configured": true, 00:16:46.976 "data_offset": 2048, 00:16:46.976 "data_size": 63488 00:16:46.976 } 00:16:46.976 ] 00:16:46.976 }' 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.976 20:29:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77239 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77239 ']' 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77239 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.976 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77239 00:16:47.235 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.235 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.235 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77239' 00:16:47.235 killing process with pid 77239 00:16:47.235 Received shutdown signal, test time was about 17.346112 seconds 00:16:47.235 00:16:47.235 Latency(us) 00:16:47.235 [2024-11-26T20:29:40.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.235 [2024-11-26T20:29:40.790Z] =================================================================================================================== 00:16:47.235 [2024-11-26T20:29:40.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.235 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77239 00:16:47.235 [2024-11-26 20:29:40.533655] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.235 [2024-11-26 20:29:40.533795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.235 20:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77239 00:16:47.235 [2024-11-26 20:29:40.533858] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.235 [2024-11-26 20:29:40.533870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:47.494 [2024-11-26 20:29:40.808232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:48.873 00:16:48.873 real 0m20.734s 00:16:48.873 user 0m27.186s 00:16:48.873 sys 0m2.186s 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.873 ************************************ 00:16:48.873 END TEST raid_rebuild_test_sb_io 00:16:48.873 ************************************ 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.873 20:29:42 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:48.873 20:29:42 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:48.873 20:29:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:48.873 20:29:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.873 20:29:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.873 ************************************ 00:16:48.873 START TEST raid_rebuild_test 00:16:48.873 ************************************ 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:48.873 20:29:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77938 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77938 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77938 ']' 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.873 20:29:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.873 [2024-11-26 20:29:42.358999] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:16:48.873 [2024-11-26 20:29:42.359162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77938 ] 00:16:48.873 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:48.873 Zero copy mechanism will not be used. 00:16:49.142 [2024-11-26 20:29:42.540421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.142 [2024-11-26 20:29:42.659282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.408 [2024-11-26 20:29:42.877274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.408 [2024-11-26 20:29:42.877350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.977 BaseBdev1_malloc 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:49.977 [2024-11-26 20:29:43.336601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.977 [2024-11-26 20:29:43.336676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.977 [2024-11-26 20:29:43.336700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.977 [2024-11-26 20:29:43.336714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.977 [2024-11-26 20:29:43.339183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.977 [2024-11-26 20:29:43.339228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.977 BaseBdev1 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.977 BaseBdev2_malloc 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.977 [2024-11-26 20:29:43.397681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:49.977 [2024-11-26 20:29:43.397774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:49.977 [2024-11-26 20:29:43.397804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.977 [2024-11-26 20:29:43.397817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.977 [2024-11-26 20:29:43.400171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.977 [2024-11-26 20:29:43.400216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.977 BaseBdev2 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.977 BaseBdev3_malloc 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.977 [2024-11-26 20:29:43.468774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:49.977 [2024-11-26 20:29:43.468885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.977 [2024-11-26 20:29:43.468914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:49.977 [2024-11-26 20:29:43.468928] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.977 [2024-11-26 20:29:43.471415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.977 [2024-11-26 20:29:43.471464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:49.977 BaseBdev3 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.977 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.978 BaseBdev4_malloc 00:16:49.978 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.978 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:49.978 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.978 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.978 [2024-11-26 20:29:43.528424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:49.978 [2024-11-26 20:29:43.528519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.978 [2024-11-26 20:29:43.528549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:49.978 [2024-11-26 20:29:43.528562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.238 [2024-11-26 20:29:43.531044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.238 [2024-11-26 20:29:43.531099] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:50.238 BaseBdev4 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 spare_malloc 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 spare_delay 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 [2024-11-26 20:29:43.599628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.238 [2024-11-26 20:29:43.599707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.238 [2024-11-26 20:29:43.599733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:50.238 [2024-11-26 20:29:43.599745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.238 [2024-11-26 
20:29:43.602315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.238 [2024-11-26 20:29:43.602363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.238 spare 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 [2024-11-26 20:29:43.611655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.238 [2024-11-26 20:29:43.613819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.238 [2024-11-26 20:29:43.613903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.238 [2024-11-26 20:29:43.613966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.238 [2024-11-26 20:29:43.614076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.238 [2024-11-26 20:29:43.614100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:50.238 [2024-11-26 20:29:43.614476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.238 [2024-11-26 20:29:43.614707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.238 [2024-11-26 20:29:43.614732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.238 [2024-11-26 20:29:43.614950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.238 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.238 "name": "raid_bdev1", 00:16:50.238 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:16:50.238 "strip_size_kb": 0, 00:16:50.238 "state": "online", 00:16:50.238 "raid_level": 
"raid1", 00:16:50.238 "superblock": false, 00:16:50.238 "num_base_bdevs": 4, 00:16:50.238 "num_base_bdevs_discovered": 4, 00:16:50.238 "num_base_bdevs_operational": 4, 00:16:50.238 "base_bdevs_list": [ 00:16:50.238 { 00:16:50.238 "name": "BaseBdev1", 00:16:50.238 "uuid": "6f7a0a31-beba-58ed-a249-101fa41050a5", 00:16:50.238 "is_configured": true, 00:16:50.238 "data_offset": 0, 00:16:50.238 "data_size": 65536 00:16:50.238 }, 00:16:50.238 { 00:16:50.238 "name": "BaseBdev2", 00:16:50.238 "uuid": "6d485f7b-711b-530a-9d05-e2cda2732e7b", 00:16:50.238 "is_configured": true, 00:16:50.238 "data_offset": 0, 00:16:50.239 "data_size": 65536 00:16:50.239 }, 00:16:50.239 { 00:16:50.239 "name": "BaseBdev3", 00:16:50.239 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:16:50.239 "is_configured": true, 00:16:50.239 "data_offset": 0, 00:16:50.239 "data_size": 65536 00:16:50.239 }, 00:16:50.239 { 00:16:50.239 "name": "BaseBdev4", 00:16:50.239 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:16:50.239 "is_configured": true, 00:16:50.239 "data_offset": 0, 00:16:50.239 "data_size": 65536 00:16:50.239 } 00:16:50.239 ] 00:16:50.239 }' 00:16:50.239 20:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.239 20:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:50.808 [2024-11-26 20:29:44.071332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.808 20:29:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.808 20:29:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:51.067 [2024-11-26 20:29:44.386410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:51.067 /dev/nbd0 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.067 1+0 records in 00:16:51.067 1+0 records out 00:16:51.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404844 s, 10.1 MB/s 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:51.067 20:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:59.182 65536+0 records in 00:16:59.182 65536+0 records out 00:16:59.182 33554432 bytes (34 MB, 32 MiB) copied, 6.80215 s, 4.9 MB/s 00:16:59.182 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:59.182 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.182 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:59.182 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.182 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.183 [2024-11-26 20:29:51.506091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.183 
20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.183 [2024-11-26 20:29:51.522183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.183 20:29:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.183 "name": "raid_bdev1", 00:16:59.183 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:16:59.183 "strip_size_kb": 0, 00:16:59.183 "state": "online", 00:16:59.183 "raid_level": "raid1", 00:16:59.183 "superblock": false, 00:16:59.183 "num_base_bdevs": 4, 00:16:59.183 "num_base_bdevs_discovered": 3, 00:16:59.183 "num_base_bdevs_operational": 3, 00:16:59.183 "base_bdevs_list": [ 00:16:59.183 { 00:16:59.183 "name": null, 00:16:59.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.183 "is_configured": false, 00:16:59.183 "data_offset": 0, 00:16:59.183 "data_size": 65536 00:16:59.183 }, 00:16:59.183 { 00:16:59.183 "name": "BaseBdev2", 00:16:59.183 "uuid": "6d485f7b-711b-530a-9d05-e2cda2732e7b", 00:16:59.183 "is_configured": true, 00:16:59.183 "data_offset": 0, 00:16:59.183 "data_size": 65536 00:16:59.183 }, 00:16:59.183 { 00:16:59.183 "name": "BaseBdev3", 00:16:59.183 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:16:59.183 "is_configured": true, 00:16:59.183 "data_offset": 0, 00:16:59.183 "data_size": 65536 00:16:59.183 }, 00:16:59.183 { 00:16:59.183 "name": "BaseBdev4", 00:16:59.183 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:16:59.183 
"is_configured": true, 00:16:59.183 "data_offset": 0, 00:16:59.183 "data_size": 65536 00:16:59.183 } 00:16:59.183 ] 00:16:59.183 }' 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.183 [2024-11-26 20:29:51.961444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:59.183 [2024-11-26 20:29:51.981289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.183 20:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:59.183 [2024-11-26 20:29:51.983674] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.441 20:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.700 "name": "raid_bdev1", 00:16:59.700 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:16:59.700 "strip_size_kb": 0, 00:16:59.700 "state": "online", 00:16:59.700 "raid_level": "raid1", 00:16:59.700 "superblock": false, 00:16:59.700 "num_base_bdevs": 4, 00:16:59.700 "num_base_bdevs_discovered": 4, 00:16:59.700 "num_base_bdevs_operational": 4, 00:16:59.700 "process": { 00:16:59.700 "type": "rebuild", 00:16:59.700 "target": "spare", 00:16:59.700 "progress": { 00:16:59.700 "blocks": 20480, 00:16:59.700 "percent": 31 00:16:59.700 } 00:16:59.700 }, 00:16:59.700 "base_bdevs_list": [ 00:16:59.700 { 00:16:59.700 "name": "spare", 00:16:59.700 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:16:59.700 "is_configured": true, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 65536 00:16:59.700 }, 00:16:59.700 { 00:16:59.700 "name": "BaseBdev2", 00:16:59.700 "uuid": "6d485f7b-711b-530a-9d05-e2cda2732e7b", 00:16:59.700 "is_configured": true, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 65536 00:16:59.700 }, 00:16:59.700 { 00:16:59.700 "name": "BaseBdev3", 00:16:59.700 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:16:59.700 "is_configured": true, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 65536 00:16:59.700 }, 00:16:59.700 { 00:16:59.700 "name": "BaseBdev4", 00:16:59.700 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:16:59.700 "is_configured": true, 00:16:59.700 "data_offset": 0, 00:16:59.700 "data_size": 65536 00:16:59.700 } 00:16:59.700 ] 00:16:59.700 }' 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.700 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.700 [2024-11-26 20:29:53.114668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.700 [2024-11-26 20:29:53.189667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:59.700 [2024-11-26 20:29:53.189752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.701 [2024-11-26 20:29:53.189773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.701 [2024-11-26 20:29:53.189785] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.701 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.958 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.958 "name": "raid_bdev1", 00:16:59.958 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:16:59.959 "strip_size_kb": 0, 00:16:59.959 "state": "online", 00:16:59.959 "raid_level": "raid1", 00:16:59.959 "superblock": false, 00:16:59.959 "num_base_bdevs": 4, 00:16:59.959 "num_base_bdevs_discovered": 3, 00:16:59.959 "num_base_bdevs_operational": 3, 00:16:59.959 "base_bdevs_list": [ 00:16:59.959 { 00:16:59.959 "name": null, 00:16:59.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.959 "is_configured": false, 00:16:59.959 "data_offset": 0, 00:16:59.959 "data_size": 65536 00:16:59.959 }, 00:16:59.959 { 00:16:59.959 "name": "BaseBdev2", 00:16:59.959 "uuid": "6d485f7b-711b-530a-9d05-e2cda2732e7b", 00:16:59.959 "is_configured": true, 00:16:59.959 "data_offset": 0, 00:16:59.959 "data_size": 65536 00:16:59.959 }, 00:16:59.959 { 
00:16:59.959 "name": "BaseBdev3", 00:16:59.959 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:16:59.959 "is_configured": true, 00:16:59.959 "data_offset": 0, 00:16:59.959 "data_size": 65536 00:16:59.959 }, 00:16:59.959 { 00:16:59.959 "name": "BaseBdev4", 00:16:59.959 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:16:59.959 "is_configured": true, 00:16:59.959 "data_offset": 0, 00:16:59.959 "data_size": 65536 00:16:59.959 } 00:16:59.959 ] 00:16:59.959 }' 00:16:59.959 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.959 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.217 "name": "raid_bdev1", 00:17:00.217 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:00.217 "strip_size_kb": 0, 00:17:00.217 "state": "online", 
00:17:00.217 "raid_level": "raid1", 00:17:00.217 "superblock": false, 00:17:00.217 "num_base_bdevs": 4, 00:17:00.217 "num_base_bdevs_discovered": 3, 00:17:00.217 "num_base_bdevs_operational": 3, 00:17:00.217 "base_bdevs_list": [ 00:17:00.217 { 00:17:00.217 "name": null, 00:17:00.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.217 "is_configured": false, 00:17:00.217 "data_offset": 0, 00:17:00.217 "data_size": 65536 00:17:00.217 }, 00:17:00.217 { 00:17:00.217 "name": "BaseBdev2", 00:17:00.217 "uuid": "6d485f7b-711b-530a-9d05-e2cda2732e7b", 00:17:00.217 "is_configured": true, 00:17:00.217 "data_offset": 0, 00:17:00.217 "data_size": 65536 00:17:00.217 }, 00:17:00.217 { 00:17:00.217 "name": "BaseBdev3", 00:17:00.217 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:00.217 "is_configured": true, 00:17:00.217 "data_offset": 0, 00:17:00.217 "data_size": 65536 00:17:00.217 }, 00:17:00.217 { 00:17:00.217 "name": "BaseBdev4", 00:17:00.217 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:00.217 "is_configured": true, 00:17:00.217 "data_offset": 0, 00:17:00.217 "data_size": 65536 00:17:00.217 } 00:17:00.217 ] 00:17:00.217 }' 00:17:00.217 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.476 [2024-11-26 20:29:53.839793] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.476 [2024-11-26 20:29:53.857123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.476 20:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:00.476 [2024-11-26 20:29:53.859330] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.411 "name": "raid_bdev1", 00:17:01.411 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:01.411 "strip_size_kb": 0, 00:17:01.411 "state": "online", 00:17:01.411 "raid_level": "raid1", 00:17:01.411 "superblock": false, 00:17:01.411 "num_base_bdevs": 4, 00:17:01.411 
"num_base_bdevs_discovered": 4, 00:17:01.411 "num_base_bdevs_operational": 4, 00:17:01.411 "process": { 00:17:01.411 "type": "rebuild", 00:17:01.411 "target": "spare", 00:17:01.411 "progress": { 00:17:01.411 "blocks": 20480, 00:17:01.411 "percent": 31 00:17:01.411 } 00:17:01.411 }, 00:17:01.411 "base_bdevs_list": [ 00:17:01.411 { 00:17:01.411 "name": "spare", 00:17:01.411 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:17:01.411 "is_configured": true, 00:17:01.411 "data_offset": 0, 00:17:01.411 "data_size": 65536 00:17:01.411 }, 00:17:01.411 { 00:17:01.411 "name": "BaseBdev2", 00:17:01.411 "uuid": "6d485f7b-711b-530a-9d05-e2cda2732e7b", 00:17:01.411 "is_configured": true, 00:17:01.411 "data_offset": 0, 00:17:01.411 "data_size": 65536 00:17:01.411 }, 00:17:01.411 { 00:17:01.411 "name": "BaseBdev3", 00:17:01.411 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:01.411 "is_configured": true, 00:17:01.411 "data_offset": 0, 00:17:01.411 "data_size": 65536 00:17:01.411 }, 00:17:01.411 { 00:17:01.411 "name": "BaseBdev4", 00:17:01.411 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:01.411 "is_configured": true, 00:17:01.411 "data_offset": 0, 00:17:01.411 "data_size": 65536 00:17:01.411 } 00:17:01.411 ] 00:17:01.411 }' 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.411 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.669 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.669 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:01.670 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:01.670 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:17:01.670 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:01.670 20:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:01.670 20:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.670 20:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 [2024-11-26 20:29:54.990337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:01.670 [2024-11-26 20:29:55.065381] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 20:29:55 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.670 "name": "raid_bdev1", 00:17:01.670 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:01.670 "strip_size_kb": 0, 00:17:01.670 "state": "online", 00:17:01.670 "raid_level": "raid1", 00:17:01.670 "superblock": false, 00:17:01.670 "num_base_bdevs": 4, 00:17:01.670 "num_base_bdevs_discovered": 3, 00:17:01.670 "num_base_bdevs_operational": 3, 00:17:01.670 "process": { 00:17:01.670 "type": "rebuild", 00:17:01.670 "target": "spare", 00:17:01.670 "progress": { 00:17:01.670 "blocks": 24576, 00:17:01.670 "percent": 37 00:17:01.670 } 00:17:01.670 }, 00:17:01.670 "base_bdevs_list": [ 00:17:01.670 { 00:17:01.670 "name": "spare", 00:17:01.670 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:17:01.670 "is_configured": true, 00:17:01.670 "data_offset": 0, 00:17:01.670 "data_size": 65536 00:17:01.670 }, 00:17:01.670 { 00:17:01.670 "name": null, 00:17:01.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.670 "is_configured": false, 00:17:01.670 "data_offset": 0, 00:17:01.670 "data_size": 65536 00:17:01.670 }, 00:17:01.670 { 00:17:01.670 "name": "BaseBdev3", 00:17:01.670 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:01.670 "is_configured": true, 00:17:01.670 "data_offset": 0, 00:17:01.670 "data_size": 65536 00:17:01.670 }, 00:17:01.670 { 00:17:01.670 "name": "BaseBdev4", 00:17:01.670 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:01.670 "is_configured": true, 00:17:01.670 "data_offset": 0, 00:17:01.670 "data_size": 65536 00:17:01.670 } 00:17:01.670 ] 00:17:01.670 }' 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 20:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.929 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.929 "name": "raid_bdev1", 00:17:01.929 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:01.929 "strip_size_kb": 0, 00:17:01.929 "state": "online", 00:17:01.929 "raid_level": "raid1", 00:17:01.929 "superblock": false, 00:17:01.929 "num_base_bdevs": 4, 00:17:01.929 "num_base_bdevs_discovered": 3, 00:17:01.929 "num_base_bdevs_operational": 3, 00:17:01.929 "process": { 00:17:01.929 "type": "rebuild", 00:17:01.929 "target": "spare", 00:17:01.929 "progress": { 
00:17:01.929 "blocks": 26624, 00:17:01.929 "percent": 40 00:17:01.929 } 00:17:01.929 }, 00:17:01.929 "base_bdevs_list": [ 00:17:01.929 { 00:17:01.929 "name": "spare", 00:17:01.929 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:17:01.929 "is_configured": true, 00:17:01.929 "data_offset": 0, 00:17:01.929 "data_size": 65536 00:17:01.929 }, 00:17:01.929 { 00:17:01.929 "name": null, 00:17:01.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.929 "is_configured": false, 00:17:01.929 "data_offset": 0, 00:17:01.929 "data_size": 65536 00:17:01.929 }, 00:17:01.929 { 00:17:01.929 "name": "BaseBdev3", 00:17:01.929 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:01.929 "is_configured": true, 00:17:01.929 "data_offset": 0, 00:17:01.929 "data_size": 65536 00:17:01.929 }, 00:17:01.929 { 00:17:01.929 "name": "BaseBdev4", 00:17:01.929 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:01.929 "is_configured": true, 00:17:01.929 "data_offset": 0, 00:17:01.929 "data_size": 65536 00:17:01.929 } 00:17:01.929 ] 00:17:01.929 }' 00:17:01.929 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.929 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.929 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.929 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.929 20:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.931 "name": "raid_bdev1", 00:17:02.931 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:02.931 "strip_size_kb": 0, 00:17:02.931 "state": "online", 00:17:02.931 "raid_level": "raid1", 00:17:02.931 "superblock": false, 00:17:02.931 "num_base_bdevs": 4, 00:17:02.931 "num_base_bdevs_discovered": 3, 00:17:02.931 "num_base_bdevs_operational": 3, 00:17:02.931 "process": { 00:17:02.931 "type": "rebuild", 00:17:02.931 "target": "spare", 00:17:02.931 "progress": { 00:17:02.931 "blocks": 51200, 00:17:02.931 "percent": 78 00:17:02.931 } 00:17:02.931 }, 00:17:02.931 "base_bdevs_list": [ 00:17:02.931 { 00:17:02.931 "name": "spare", 00:17:02.931 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:17:02.931 "is_configured": true, 00:17:02.931 "data_offset": 0, 00:17:02.931 "data_size": 65536 00:17:02.931 }, 00:17:02.931 { 00:17:02.931 "name": null, 00:17:02.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.931 "is_configured": false, 00:17:02.931 "data_offset": 0, 00:17:02.931 "data_size": 65536 00:17:02.931 }, 00:17:02.931 { 00:17:02.931 "name": "BaseBdev3", 00:17:02.931 "uuid": 
"2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:02.931 "is_configured": true, 00:17:02.931 "data_offset": 0, 00:17:02.931 "data_size": 65536 00:17:02.931 }, 00:17:02.931 { 00:17:02.931 "name": "BaseBdev4", 00:17:02.931 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:02.931 "is_configured": true, 00:17:02.931 "data_offset": 0, 00:17:02.931 "data_size": 65536 00:17:02.931 } 00:17:02.931 ] 00:17:02.931 }' 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.931 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.190 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.190 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.190 20:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.757 [2024-11-26 20:29:57.075468] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:03.757 [2024-11-26 20:29:57.075592] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:03.757 [2024-11-26 20:29:57.075649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.016 20:29:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.016 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.275 "name": "raid_bdev1", 00:17:04.275 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:04.275 "strip_size_kb": 0, 00:17:04.275 "state": "online", 00:17:04.275 "raid_level": "raid1", 00:17:04.275 "superblock": false, 00:17:04.275 "num_base_bdevs": 4, 00:17:04.275 "num_base_bdevs_discovered": 3, 00:17:04.275 "num_base_bdevs_operational": 3, 00:17:04.275 "base_bdevs_list": [ 00:17:04.275 { 00:17:04.275 "name": "spare", 00:17:04.275 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 }, 00:17:04.275 { 00:17:04.275 "name": null, 00:17:04.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.275 "is_configured": false, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 }, 00:17:04.275 { 00:17:04.275 "name": "BaseBdev3", 00:17:04.275 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 }, 00:17:04.275 { 00:17:04.275 "name": "BaseBdev4", 00:17:04.275 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 } 00:17:04.275 ] 00:17:04.275 }' 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.275 "name": "raid_bdev1", 00:17:04.275 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:04.275 "strip_size_kb": 0, 00:17:04.275 "state": "online", 00:17:04.275 "raid_level": "raid1", 00:17:04.275 "superblock": false, 00:17:04.275 "num_base_bdevs": 4, 00:17:04.275 "num_base_bdevs_discovered": 3, 00:17:04.275 "num_base_bdevs_operational": 3, 00:17:04.275 
"base_bdevs_list": [ 00:17:04.275 { 00:17:04.275 "name": "spare", 00:17:04.275 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 }, 00:17:04.275 { 00:17:04.275 "name": null, 00:17:04.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.275 "is_configured": false, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 }, 00:17:04.275 { 00:17:04.275 "name": "BaseBdev3", 00:17:04.275 "uuid": "2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 }, 00:17:04.275 { 00:17:04.275 "name": "BaseBdev4", 00:17:04.275 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 0, 00:17:04.275 "data_size": 65536 00:17:04.275 } 00:17:04.275 ] 00:17:04.275 }' 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.275 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.534 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.534 "name": "raid_bdev1", 00:17:04.534 "uuid": "afa844d0-d410-48f9-a0b8-49ede0698b6d", 00:17:04.534 "strip_size_kb": 0, 00:17:04.534 "state": "online", 00:17:04.534 "raid_level": "raid1", 00:17:04.534 "superblock": false, 00:17:04.534 "num_base_bdevs": 4, 00:17:04.534 "num_base_bdevs_discovered": 3, 00:17:04.534 "num_base_bdevs_operational": 3, 00:17:04.534 "base_bdevs_list": [ 00:17:04.534 { 00:17:04.534 "name": "spare", 00:17:04.534 "uuid": "b66ec83f-5c77-5c94-a393-53e8cbac16c4", 00:17:04.534 "is_configured": true, 00:17:04.534 "data_offset": 0, 00:17:04.534 "data_size": 65536 00:17:04.534 }, 00:17:04.534 { 00:17:04.534 "name": null, 00:17:04.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.534 "is_configured": false, 00:17:04.534 "data_offset": 0, 00:17:04.534 "data_size": 65536 00:17:04.535 }, 00:17:04.535 { 00:17:04.535 "name": "BaseBdev3", 00:17:04.535 "uuid": 
"2d8c1f3a-7142-51c8-bc93-7c54869a2a16", 00:17:04.535 "is_configured": true, 00:17:04.535 "data_offset": 0, 00:17:04.535 "data_size": 65536 00:17:04.535 }, 00:17:04.535 { 00:17:04.535 "name": "BaseBdev4", 00:17:04.535 "uuid": "105aea39-24a3-56cc-8ab9-93affa6b1259", 00:17:04.535 "is_configured": true, 00:17:04.535 "data_offset": 0, 00:17:04.535 "data_size": 65536 00:17:04.535 } 00:17:04.535 ] 00:17:04.535 }' 00:17:04.535 20:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.535 20:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.794 [2024-11-26 20:29:58.288107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.794 [2024-11-26 20:29:58.288145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.794 [2024-11-26 20:29:58.288268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.794 [2024-11-26 20:29:58.288372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.794 [2024-11-26 20:29:58.288389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.794 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:05.053 /dev/nbd0 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.053 20:29:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.053 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.313 1+0 records in 00:17:05.313 1+0 records out 00:17:05.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258441 s, 15.8 MB/s 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:05.313 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:05.313 /dev/nbd1 00:17:05.313 
20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.570 1+0 records in 00:17:05.570 1+0 records out 00:17:05.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423469 s, 9.7 MB/s 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.570 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:05.571 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.571 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.571 20:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:05.571 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:17:05.571 20:29:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:05.571 20:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:05.571 20:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:05.571 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.571 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:05.571 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.571 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:05.571 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.571 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.827 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77938 00:17:06.083 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77938 ']' 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77938 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77938 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.084 killing process with pid 77938 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77938' 00:17:06.084 
20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77938 00:17:06.084 Received shutdown signal, test time was about 60.000000 seconds 00:17:06.084 00:17:06.084 Latency(us) 00:17:06.084 [2024-11-26T20:29:59.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.084 [2024-11-26T20:29:59.639Z] =================================================================================================================== 00:17:06.084 [2024-11-26T20:29:59.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:06.084 20:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77938 00:17:06.084 [2024-11-26 20:29:59.626794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:06.703 [2024-11-26 20:30:00.214457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.075 20:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:08.075 00:17:08.075 real 0m19.301s 00:17:08.075 user 0m21.622s 00:17:08.075 sys 0m3.487s 00:17:08.075 20:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.075 20:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.075 ************************************ 00:17:08.075 END TEST raid_rebuild_test 00:17:08.075 ************************************ 00:17:08.075 20:30:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:08.075 20:30:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:08.075 20:30:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.076 20:30:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.076 ************************************ 00:17:08.076 START TEST raid_rebuild_test_sb 00:17:08.076 ************************************ 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78407 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78407 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78407 ']' 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.076 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.076 20:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.334 [2024-11-26 20:30:01.705942] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:17:08.334 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:08.334 Zero copy mechanism will not be used. 00:17:08.334 [2024-11-26 20:30:01.706330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78407 ] 00:17:08.592 [2024-11-26 20:30:01.897976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.592 [2024-11-26 20:30:02.030883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.850 [2024-11-26 20:30:02.273120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.850 [2024-11-26 20:30:02.273169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.107 BaseBdev1_malloc 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.107 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.107 [2024-11-26 20:30:02.651854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:09.108 [2024-11-26 20:30:02.651926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.108 [2024-11-26 20:30:02.651952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:09.108 [2024-11-26 20:30:02.651965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.108 [2024-11-26 20:30:02.654429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.108 [2024-11-26 20:30:02.654480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:09.108 BaseBdev1 00:17:09.108 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.108 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.108 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:09.108 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.108 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.366 BaseBdev2_malloc 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.366 [2024-11-26 20:30:02.714416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:09.366 [2024-11-26 20:30:02.714490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.366 [2024-11-26 20:30:02.714518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:09.366 [2024-11-26 20:30:02.714532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.366 [2024-11-26 20:30:02.716967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.366 [2024-11-26 20:30:02.717011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:09.366 BaseBdev2 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.366 BaseBdev3_malloc 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.366 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.366 [2024-11-26 20:30:02.791796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:09.366 [2024-11-26 20:30:02.791858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.366 [2024-11-26 20:30:02.791884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:09.366 [2024-11-26 20:30:02.791897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.367 [2024-11-26 20:30:02.794323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.367 [2024-11-26 20:30:02.794365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:09.367 BaseBdev3 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.367 BaseBdev4_malloc 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:09.367 [2024-11-26 20:30:02.853913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:09.367 [2024-11-26 20:30:02.853981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.367 [2024-11-26 20:30:02.854006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:09.367 [2024-11-26 20:30:02.854020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.367 [2024-11-26 20:30:02.856384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.367 [2024-11-26 20:30:02.856429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:09.367 BaseBdev4 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.367 spare_malloc 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.367 spare_delay 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:09.367 20:30:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.367 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.625 [2024-11-26 20:30:02.921760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:09.625 [2024-11-26 20:30:02.921816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.625 [2024-11-26 20:30:02.921837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:09.625 [2024-11-26 20:30:02.921849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.625 [2024-11-26 20:30:02.924213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.625 [2024-11-26 20:30:02.924265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:09.625 spare 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.625 [2024-11-26 20:30:02.933793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.625 [2024-11-26 20:30:02.935845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.625 [2024-11-26 20:30:02.935919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.625 [2024-11-26 20:30:02.935976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:09.625 [2024-11-26 20:30:02.936204] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:09.625 [2024-11-26 20:30:02.936229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:09.625 [2024-11-26 20:30:02.936527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:09.625 [2024-11-26 20:30:02.936763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:09.625 [2024-11-26 20:30:02.936789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:09.625 [2024-11-26 20:30:02.936998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.625 "name": "raid_bdev1", 00:17:09.625 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:09.625 "strip_size_kb": 0, 00:17:09.625 "state": "online", 00:17:09.625 "raid_level": "raid1", 00:17:09.625 "superblock": true, 00:17:09.625 "num_base_bdevs": 4, 00:17:09.625 "num_base_bdevs_discovered": 4, 00:17:09.625 "num_base_bdevs_operational": 4, 00:17:09.625 "base_bdevs_list": [ 00:17:09.625 { 00:17:09.625 "name": "BaseBdev1", 00:17:09.625 "uuid": "c50a6441-0709-51e0-95a0-2cf9238200dd", 00:17:09.625 "is_configured": true, 00:17:09.625 "data_offset": 2048, 00:17:09.625 "data_size": 63488 00:17:09.625 }, 00:17:09.625 { 00:17:09.625 "name": "BaseBdev2", 00:17:09.625 "uuid": "c6d51786-7c28-594b-8913-86d7f5ee5bb7", 00:17:09.625 "is_configured": true, 00:17:09.625 "data_offset": 2048, 00:17:09.625 "data_size": 63488 00:17:09.625 }, 00:17:09.625 { 00:17:09.625 "name": "BaseBdev3", 00:17:09.625 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:09.625 "is_configured": true, 00:17:09.625 "data_offset": 2048, 00:17:09.625 "data_size": 63488 00:17:09.625 }, 00:17:09.625 { 00:17:09.625 "name": "BaseBdev4", 00:17:09.625 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:09.625 "is_configured": true, 00:17:09.625 "data_offset": 2048, 00:17:09.625 "data_size": 63488 00:17:09.625 } 00:17:09.625 ] 00:17:09.625 }' 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.625 20:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.882 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.882 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.882 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:09.882 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.139 [2024-11-26 20:30:03.437388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.139 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.139 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.140 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:10.396 [2024-11-26 20:30:03.812422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:10.396 /dev/nbd0 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:10.396 
20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.396 1+0 records in 00:17:10.396 1+0 records out 00:17:10.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504118 s, 8.1 MB/s 00:17:10.396 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:10.397 20:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:16.958 63488+0 records in 00:17:16.958 63488+0 records out 00:17:16.958 32505856 bytes (33 MB, 31 MiB) copied, 6.23629 s, 5.2 MB/s 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:16.958 [2024-11-26 20:30:10.356555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.958 [2024-11-26 20:30:10.388612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.958 
20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.958 "name": "raid_bdev1", 00:17:16.958 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:16.958 "strip_size_kb": 0, 00:17:16.958 "state": 
"online", 00:17:16.958 "raid_level": "raid1", 00:17:16.958 "superblock": true, 00:17:16.958 "num_base_bdevs": 4, 00:17:16.958 "num_base_bdevs_discovered": 3, 00:17:16.958 "num_base_bdevs_operational": 3, 00:17:16.958 "base_bdevs_list": [ 00:17:16.958 { 00:17:16.958 "name": null, 00:17:16.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.958 "is_configured": false, 00:17:16.958 "data_offset": 0, 00:17:16.958 "data_size": 63488 00:17:16.958 }, 00:17:16.958 { 00:17:16.958 "name": "BaseBdev2", 00:17:16.958 "uuid": "c6d51786-7c28-594b-8913-86d7f5ee5bb7", 00:17:16.958 "is_configured": true, 00:17:16.958 "data_offset": 2048, 00:17:16.958 "data_size": 63488 00:17:16.958 }, 00:17:16.958 { 00:17:16.958 "name": "BaseBdev3", 00:17:16.958 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:16.958 "is_configured": true, 00:17:16.958 "data_offset": 2048, 00:17:16.958 "data_size": 63488 00:17:16.958 }, 00:17:16.958 { 00:17:16.958 "name": "BaseBdev4", 00:17:16.958 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:16.958 "is_configured": true, 00:17:16.958 "data_offset": 2048, 00:17:16.958 "data_size": 63488 00:17:16.958 } 00:17:16.958 ] 00:17:16.958 }' 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.958 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.524 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.524 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.524 20:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.524 [2024-11-26 20:30:10.867877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.524 [2024-11-26 20:30:10.885450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:17.524 20:30:10 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.524 20:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:17.524 [2024-11-26 20:30:10.887931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.470 "name": "raid_bdev1", 00:17:18.470 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:18.470 "strip_size_kb": 0, 00:17:18.470 "state": "online", 00:17:18.470 "raid_level": "raid1", 00:17:18.470 "superblock": true, 00:17:18.470 "num_base_bdevs": 4, 00:17:18.470 "num_base_bdevs_discovered": 4, 00:17:18.470 "num_base_bdevs_operational": 4, 00:17:18.470 "process": { 00:17:18.470 "type": "rebuild", 00:17:18.470 "target": "spare", 00:17:18.470 "progress": { 00:17:18.470 "blocks": 20480, 
00:17:18.470 "percent": 32 00:17:18.470 } 00:17:18.470 }, 00:17:18.470 "base_bdevs_list": [ 00:17:18.470 { 00:17:18.470 "name": "spare", 00:17:18.470 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:18.470 "is_configured": true, 00:17:18.470 "data_offset": 2048, 00:17:18.470 "data_size": 63488 00:17:18.470 }, 00:17:18.470 { 00:17:18.470 "name": "BaseBdev2", 00:17:18.470 "uuid": "c6d51786-7c28-594b-8913-86d7f5ee5bb7", 00:17:18.470 "is_configured": true, 00:17:18.470 "data_offset": 2048, 00:17:18.470 "data_size": 63488 00:17:18.470 }, 00:17:18.470 { 00:17:18.470 "name": "BaseBdev3", 00:17:18.470 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:18.470 "is_configured": true, 00:17:18.470 "data_offset": 2048, 00:17:18.470 "data_size": 63488 00:17:18.470 }, 00:17:18.470 { 00:17:18.470 "name": "BaseBdev4", 00:17:18.470 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:18.470 "is_configured": true, 00:17:18.470 "data_offset": 2048, 00:17:18.470 "data_size": 63488 00:17:18.470 } 00:17:18.470 ] 00:17:18.470 }' 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.470 20:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.729 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.729 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:18.729 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.729 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.729 [2024-11-26 20:30:12.035475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.729 [2024-11-26 20:30:12.099348] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:18.729 [2024-11-26 20:30:12.099460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.729 [2024-11-26 20:30:12.099483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.730 [2024-11-26 20:30:12.099497] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.730 "name": "raid_bdev1", 00:17:18.730 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:18.730 "strip_size_kb": 0, 00:17:18.730 "state": "online", 00:17:18.730 "raid_level": "raid1", 00:17:18.730 "superblock": true, 00:17:18.730 "num_base_bdevs": 4, 00:17:18.730 "num_base_bdevs_discovered": 3, 00:17:18.730 "num_base_bdevs_operational": 3, 00:17:18.730 "base_bdevs_list": [ 00:17:18.730 { 00:17:18.730 "name": null, 00:17:18.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.730 "is_configured": false, 00:17:18.730 "data_offset": 0, 00:17:18.730 "data_size": 63488 00:17:18.730 }, 00:17:18.730 { 00:17:18.730 "name": "BaseBdev2", 00:17:18.730 "uuid": "c6d51786-7c28-594b-8913-86d7f5ee5bb7", 00:17:18.730 "is_configured": true, 00:17:18.730 "data_offset": 2048, 00:17:18.730 "data_size": 63488 00:17:18.730 }, 00:17:18.730 { 00:17:18.730 "name": "BaseBdev3", 00:17:18.730 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:18.730 "is_configured": true, 00:17:18.730 "data_offset": 2048, 00:17:18.730 "data_size": 63488 00:17:18.730 }, 00:17:18.730 { 00:17:18.730 "name": "BaseBdev4", 00:17:18.730 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:18.730 "is_configured": true, 00:17:18.730 "data_offset": 2048, 00:17:18.730 "data_size": 63488 00:17:18.730 } 00:17:18.730 ] 00:17:18.730 }' 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.730 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.299 "name": "raid_bdev1", 00:17:19.299 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:19.299 "strip_size_kb": 0, 00:17:19.299 "state": "online", 00:17:19.299 "raid_level": "raid1", 00:17:19.299 "superblock": true, 00:17:19.299 "num_base_bdevs": 4, 00:17:19.299 "num_base_bdevs_discovered": 3, 00:17:19.299 "num_base_bdevs_operational": 3, 00:17:19.299 "base_bdevs_list": [ 00:17:19.299 { 00:17:19.299 "name": null, 00:17:19.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.299 "is_configured": false, 00:17:19.299 "data_offset": 0, 00:17:19.299 "data_size": 63488 00:17:19.299 }, 00:17:19.299 { 00:17:19.299 "name": "BaseBdev2", 00:17:19.299 "uuid": "c6d51786-7c28-594b-8913-86d7f5ee5bb7", 00:17:19.299 "is_configured": true, 00:17:19.299 "data_offset": 2048, 00:17:19.299 "data_size": 63488 00:17:19.299 }, 00:17:19.299 { 00:17:19.299 "name": "BaseBdev3", 00:17:19.299 "uuid": 
"ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:19.299 "is_configured": true, 00:17:19.299 "data_offset": 2048, 00:17:19.299 "data_size": 63488 00:17:19.299 }, 00:17:19.299 { 00:17:19.299 "name": "BaseBdev4", 00:17:19.299 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:19.299 "is_configured": true, 00:17:19.299 "data_offset": 2048, 00:17:19.299 "data_size": 63488 00:17:19.299 } 00:17:19.299 ] 00:17:19.299 }' 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.299 [2024-11-26 20:30:12.726915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.299 [2024-11-26 20:30:12.745274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.299 20:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:19.299 [2024-11-26 20:30:12.747949] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.237 20:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.496 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.496 "name": "raid_bdev1", 00:17:20.496 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:20.496 "strip_size_kb": 0, 00:17:20.496 "state": "online", 00:17:20.496 "raid_level": "raid1", 00:17:20.496 "superblock": true, 00:17:20.496 "num_base_bdevs": 4, 00:17:20.496 "num_base_bdevs_discovered": 4, 00:17:20.496 "num_base_bdevs_operational": 4, 00:17:20.496 "process": { 00:17:20.496 "type": "rebuild", 00:17:20.496 "target": "spare", 00:17:20.496 "progress": { 00:17:20.496 "blocks": 20480, 00:17:20.496 "percent": 32 00:17:20.496 } 00:17:20.496 }, 00:17:20.496 "base_bdevs_list": [ 00:17:20.496 { 00:17:20.496 "name": "spare", 00:17:20.496 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:20.496 "is_configured": true, 00:17:20.496 "data_offset": 2048, 00:17:20.496 "data_size": 63488 00:17:20.496 }, 00:17:20.496 { 00:17:20.496 "name": "BaseBdev2", 00:17:20.496 "uuid": "c6d51786-7c28-594b-8913-86d7f5ee5bb7", 00:17:20.496 "is_configured": true, 00:17:20.496 "data_offset": 2048, 
00:17:20.496 "data_size": 63488 00:17:20.496 }, 00:17:20.496 { 00:17:20.496 "name": "BaseBdev3", 00:17:20.496 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:20.496 "is_configured": true, 00:17:20.496 "data_offset": 2048, 00:17:20.496 "data_size": 63488 00:17:20.496 }, 00:17:20.497 { 00:17:20.497 "name": "BaseBdev4", 00:17:20.497 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:20.497 "is_configured": true, 00:17:20.497 "data_offset": 2048, 00:17:20.497 "data_size": 63488 00:17:20.497 } 00:17:20.497 ] 00:17:20.497 }' 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:20.497 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.497 20:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.497 [2024-11-26 20:30:13.915370] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.756 [2024-11-26 20:30:14.059201] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.756 "name": "raid_bdev1", 00:17:20.756 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:20.756 "strip_size_kb": 0, 00:17:20.756 "state": "online", 00:17:20.756 "raid_level": "raid1", 00:17:20.756 "superblock": true, 00:17:20.756 "num_base_bdevs": 4, 
00:17:20.756 "num_base_bdevs_discovered": 3, 00:17:20.756 "num_base_bdevs_operational": 3, 00:17:20.756 "process": { 00:17:20.756 "type": "rebuild", 00:17:20.756 "target": "spare", 00:17:20.756 "progress": { 00:17:20.756 "blocks": 24576, 00:17:20.756 "percent": 38 00:17:20.756 } 00:17:20.756 }, 00:17:20.756 "base_bdevs_list": [ 00:17:20.756 { 00:17:20.756 "name": "spare", 00:17:20.756 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:20.756 "is_configured": true, 00:17:20.756 "data_offset": 2048, 00:17:20.756 "data_size": 63488 00:17:20.756 }, 00:17:20.756 { 00:17:20.756 "name": null, 00:17:20.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.756 "is_configured": false, 00:17:20.756 "data_offset": 0, 00:17:20.756 "data_size": 63488 00:17:20.756 }, 00:17:20.756 { 00:17:20.756 "name": "BaseBdev3", 00:17:20.756 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:20.756 "is_configured": true, 00:17:20.756 "data_offset": 2048, 00:17:20.756 "data_size": 63488 00:17:20.756 }, 00:17:20.756 { 00:17:20.756 "name": "BaseBdev4", 00:17:20.756 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:20.756 "is_configured": true, 00:17:20.756 "data_offset": 2048, 00:17:20.756 "data_size": 63488 00:17:20.756 } 00:17:20.756 ] 00:17:20.756 }' 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=487 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.756 20:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.757 20:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.757 20:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.757 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.757 "name": "raid_bdev1", 00:17:20.757 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:20.757 "strip_size_kb": 0, 00:17:20.757 "state": "online", 00:17:20.757 "raid_level": "raid1", 00:17:20.757 "superblock": true, 00:17:20.757 "num_base_bdevs": 4, 00:17:20.757 "num_base_bdevs_discovered": 3, 00:17:20.757 "num_base_bdevs_operational": 3, 00:17:20.757 "process": { 00:17:20.757 "type": "rebuild", 00:17:20.757 "target": "spare", 00:17:20.757 "progress": { 00:17:20.757 "blocks": 26624, 00:17:20.757 "percent": 41 00:17:20.757 } 00:17:20.757 }, 00:17:20.757 "base_bdevs_list": [ 00:17:20.757 { 00:17:20.757 "name": "spare", 00:17:20.757 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:20.757 "is_configured": true, 00:17:20.757 "data_offset": 2048, 00:17:20.757 "data_size": 63488 00:17:20.757 }, 00:17:20.757 { 
00:17:20.757 "name": null, 00:17:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.757 "is_configured": false, 00:17:20.757 "data_offset": 0, 00:17:20.757 "data_size": 63488 00:17:20.757 }, 00:17:20.757 { 00:17:20.757 "name": "BaseBdev3", 00:17:20.757 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:20.757 "is_configured": true, 00:17:20.757 "data_offset": 2048, 00:17:20.757 "data_size": 63488 00:17:20.757 }, 00:17:20.757 { 00:17:20.757 "name": "BaseBdev4", 00:17:20.757 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:20.757 "is_configured": true, 00:17:20.757 "data_offset": 2048, 00:17:20.757 "data_size": 63488 00:17:20.757 } 00:17:20.757 ] 00:17:20.757 }' 00:17:20.757 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.016 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.016 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.016 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.016 20:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.955 "name": "raid_bdev1", 00:17:21.955 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:21.955 "strip_size_kb": 0, 00:17:21.955 "state": "online", 00:17:21.955 "raid_level": "raid1", 00:17:21.955 "superblock": true, 00:17:21.955 "num_base_bdevs": 4, 00:17:21.955 "num_base_bdevs_discovered": 3, 00:17:21.955 "num_base_bdevs_operational": 3, 00:17:21.955 "process": { 00:17:21.955 "type": "rebuild", 00:17:21.955 "target": "spare", 00:17:21.955 "progress": { 00:17:21.955 "blocks": 51200, 00:17:21.955 "percent": 80 00:17:21.955 } 00:17:21.955 }, 00:17:21.955 "base_bdevs_list": [ 00:17:21.955 { 00:17:21.955 "name": "spare", 00:17:21.955 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:21.955 "is_configured": true, 00:17:21.955 "data_offset": 2048, 00:17:21.955 "data_size": 63488 00:17:21.955 }, 00:17:21.955 { 00:17:21.955 "name": null, 00:17:21.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.955 "is_configured": false, 00:17:21.955 "data_offset": 0, 00:17:21.955 "data_size": 63488 00:17:21.955 }, 00:17:21.955 { 00:17:21.955 "name": "BaseBdev3", 00:17:21.955 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:21.955 "is_configured": true, 00:17:21.955 "data_offset": 2048, 00:17:21.955 "data_size": 63488 00:17:21.955 }, 00:17:21.955 { 00:17:21.955 "name": "BaseBdev4", 00:17:21.955 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:21.955 "is_configured": true, 00:17:21.955 "data_offset": 
2048, 00:17:21.955 "data_size": 63488 00:17:21.955 } 00:17:21.955 ] 00:17:21.955 }' 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.955 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.215 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.215 20:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.474 [2024-11-26 20:30:15.976986] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:22.474 [2024-11-26 20:30:15.977103] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:22.474 [2024-11-26 20:30:15.977309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.042 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.042 "name": "raid_bdev1", 00:17:23.042 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:23.042 "strip_size_kb": 0, 00:17:23.042 "state": "online", 00:17:23.042 "raid_level": "raid1", 00:17:23.042 "superblock": true, 00:17:23.042 "num_base_bdevs": 4, 00:17:23.042 "num_base_bdevs_discovered": 3, 00:17:23.042 "num_base_bdevs_operational": 3, 00:17:23.042 "base_bdevs_list": [ 00:17:23.042 { 00:17:23.042 "name": "spare", 00:17:23.042 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:23.042 "is_configured": true, 00:17:23.042 "data_offset": 2048, 00:17:23.042 "data_size": 63488 00:17:23.042 }, 00:17:23.042 { 00:17:23.042 "name": null, 00:17:23.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.042 "is_configured": false, 00:17:23.042 "data_offset": 0, 00:17:23.042 "data_size": 63488 00:17:23.042 }, 00:17:23.042 { 00:17:23.042 "name": "BaseBdev3", 00:17:23.042 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:23.042 "is_configured": true, 00:17:23.042 "data_offset": 2048, 00:17:23.042 "data_size": 63488 00:17:23.042 }, 00:17:23.042 { 00:17:23.042 "name": "BaseBdev4", 00:17:23.042 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:23.042 "is_configured": true, 00:17:23.042 "data_offset": 2048, 00:17:23.042 "data_size": 63488 00:17:23.042 } 00:17:23.042 ] 00:17:23.042 }' 00:17:23.043 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.325 "name": "raid_bdev1", 00:17:23.325 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:23.325 "strip_size_kb": 0, 00:17:23.325 "state": "online", 00:17:23.325 "raid_level": "raid1", 00:17:23.325 "superblock": true, 00:17:23.325 "num_base_bdevs": 4, 00:17:23.325 "num_base_bdevs_discovered": 3, 00:17:23.325 "num_base_bdevs_operational": 3, 00:17:23.325 "base_bdevs_list": [ 00:17:23.325 { 00:17:23.325 "name": "spare", 00:17:23.325 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:23.325 "is_configured": true, 00:17:23.325 "data_offset": 2048, 
00:17:23.325 "data_size": 63488 00:17:23.325 }, 00:17:23.325 { 00:17:23.325 "name": null, 00:17:23.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.325 "is_configured": false, 00:17:23.325 "data_offset": 0, 00:17:23.325 "data_size": 63488 00:17:23.325 }, 00:17:23.325 { 00:17:23.325 "name": "BaseBdev3", 00:17:23.325 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:23.325 "is_configured": true, 00:17:23.325 "data_offset": 2048, 00:17:23.325 "data_size": 63488 00:17:23.325 }, 00:17:23.325 { 00:17:23.325 "name": "BaseBdev4", 00:17:23.325 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:23.325 "is_configured": true, 00:17:23.325 "data_offset": 2048, 00:17:23.325 "data_size": 63488 00:17:23.325 } 00:17:23.325 ] 00:17:23.325 }' 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.325 
20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.325 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.602 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.602 "name": "raid_bdev1", 00:17:23.602 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:23.602 "strip_size_kb": 0, 00:17:23.602 "state": "online", 00:17:23.602 "raid_level": "raid1", 00:17:23.602 "superblock": true, 00:17:23.602 "num_base_bdevs": 4, 00:17:23.602 "num_base_bdevs_discovered": 3, 00:17:23.602 "num_base_bdevs_operational": 3, 00:17:23.602 "base_bdevs_list": [ 00:17:23.602 { 00:17:23.602 "name": "spare", 00:17:23.602 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:23.602 "is_configured": true, 00:17:23.602 "data_offset": 2048, 00:17:23.602 "data_size": 63488 00:17:23.602 }, 00:17:23.602 { 00:17:23.602 "name": null, 00:17:23.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.602 "is_configured": false, 00:17:23.602 "data_offset": 0, 00:17:23.602 "data_size": 63488 00:17:23.602 }, 00:17:23.602 { 00:17:23.602 "name": "BaseBdev3", 00:17:23.602 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:23.602 "is_configured": true, 00:17:23.602 "data_offset": 2048, 00:17:23.602 "data_size": 63488 
00:17:23.602 }, 00:17:23.602 { 00:17:23.602 "name": "BaseBdev4", 00:17:23.602 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:23.602 "is_configured": true, 00:17:23.602 "data_offset": 2048, 00:17:23.602 "data_size": 63488 00:17:23.602 } 00:17:23.602 ] 00:17:23.602 }' 00:17:23.602 20:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.602 20:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.862 [2024-11-26 20:30:17.241375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.862 [2024-11-26 20:30:17.241429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.862 [2024-11-26 20:30:17.241556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.862 [2024-11-26 20:30:17.241662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.862 [2024-11-26 20:30:17.241675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.862 
20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.862 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:24.121 /dev/nbd0 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.121 1+0 records in 00:17:24.121 1+0 records out 00:17:24.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468746 s, 8.7 MB/s 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.121 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:24.122 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.122 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.122 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:24.122 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.122 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:24.122 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:24.381 /dev/nbd1 00:17:24.381 20:30:17 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.381 1+0 records in 00:17:24.381 1+0 records out 00:17:24.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501237 s, 8.2 MB/s 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:24.381 20:30:17 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:24.381 20:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:24.640 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:24.640 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.640 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:24.641 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.641 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:24.641 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.641 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.898 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.157 [2024-11-26 20:30:18.605769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:17:25.157 [2024-11-26 20:30:18.605873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.157 [2024-11-26 20:30:18.605906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:25.157 [2024-11-26 20:30:18.605920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.157 [2024-11-26 20:30:18.609002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.157 [2024-11-26 20:30:18.609058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.157 [2024-11-26 20:30:18.609198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:25.157 [2024-11-26 20:30:18.609302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.157 [2024-11-26 20:30:18.609478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.157 [2024-11-26 20:30:18.609595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.157 spare 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.157 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.157 [2024-11-26 20:30:18.709590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:25.157 [2024-11-26 20:30:18.709660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:25.157 [2024-11-26 20:30:18.710158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:25.157 [2024-11-26 20:30:18.710459] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:25.157 [2024-11-26 20:30:18.710482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:25.157 [2024-11-26 20:30:18.710723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.417 "name": "raid_bdev1", 00:17:25.417 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:25.417 "strip_size_kb": 0, 00:17:25.417 "state": "online", 00:17:25.417 "raid_level": "raid1", 00:17:25.417 "superblock": true, 00:17:25.417 "num_base_bdevs": 4, 00:17:25.417 "num_base_bdevs_discovered": 3, 00:17:25.417 "num_base_bdevs_operational": 3, 00:17:25.417 "base_bdevs_list": [ 00:17:25.417 { 00:17:25.417 "name": "spare", 00:17:25.417 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:25.417 "is_configured": true, 00:17:25.417 "data_offset": 2048, 00:17:25.417 "data_size": 63488 00:17:25.417 }, 00:17:25.417 { 00:17:25.417 "name": null, 00:17:25.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.417 "is_configured": false, 00:17:25.417 "data_offset": 2048, 00:17:25.417 "data_size": 63488 00:17:25.417 }, 00:17:25.417 { 00:17:25.417 "name": "BaseBdev3", 00:17:25.417 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:25.417 "is_configured": true, 00:17:25.417 "data_offset": 2048, 00:17:25.417 "data_size": 63488 00:17:25.417 }, 00:17:25.417 { 00:17:25.417 "name": "BaseBdev4", 00:17:25.417 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:25.417 "is_configured": true, 00:17:25.417 "data_offset": 2048, 00:17:25.417 "data_size": 63488 00:17:25.417 } 00:17:25.417 ] 00:17:25.417 }' 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.417 20:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.676 20:30:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.676 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.933 "name": "raid_bdev1", 00:17:25.933 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:25.933 "strip_size_kb": 0, 00:17:25.933 "state": "online", 00:17:25.933 "raid_level": "raid1", 00:17:25.933 "superblock": true, 00:17:25.933 "num_base_bdevs": 4, 00:17:25.933 "num_base_bdevs_discovered": 3, 00:17:25.933 "num_base_bdevs_operational": 3, 00:17:25.933 "base_bdevs_list": [ 00:17:25.933 { 00:17:25.933 "name": "spare", 00:17:25.933 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:25.933 "is_configured": true, 00:17:25.933 "data_offset": 2048, 00:17:25.933 "data_size": 63488 00:17:25.933 }, 00:17:25.933 { 00:17:25.933 "name": null, 00:17:25.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.933 "is_configured": false, 00:17:25.933 "data_offset": 2048, 00:17:25.933 "data_size": 63488 00:17:25.933 }, 00:17:25.933 { 00:17:25.933 "name": "BaseBdev3", 00:17:25.933 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:25.933 "is_configured": true, 00:17:25.933 "data_offset": 2048, 00:17:25.933 "data_size": 63488 00:17:25.933 
}, 00:17:25.933 { 00:17:25.933 "name": "BaseBdev4", 00:17:25.933 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:25.933 "is_configured": true, 00:17:25.933 "data_offset": 2048, 00:17:25.933 "data_size": 63488 00:17:25.933 } 00:17:25.933 ] 00:17:25.933 }' 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.933 [2024-11-26 20:30:19.372687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.933 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.933 "name": "raid_bdev1", 00:17:25.933 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:25.933 "strip_size_kb": 0, 00:17:25.933 "state": "online", 00:17:25.933 "raid_level": "raid1", 00:17:25.933 "superblock": true, 00:17:25.933 "num_base_bdevs": 4, 00:17:25.933 "num_base_bdevs_discovered": 2, 00:17:25.933 "num_base_bdevs_operational": 
2, 00:17:25.933 "base_bdevs_list": [ 00:17:25.933 { 00:17:25.933 "name": null, 00:17:25.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.934 "is_configured": false, 00:17:25.934 "data_offset": 0, 00:17:25.934 "data_size": 63488 00:17:25.934 }, 00:17:25.934 { 00:17:25.934 "name": null, 00:17:25.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.934 "is_configured": false, 00:17:25.934 "data_offset": 2048, 00:17:25.934 "data_size": 63488 00:17:25.934 }, 00:17:25.934 { 00:17:25.934 "name": "BaseBdev3", 00:17:25.934 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:25.934 "is_configured": true, 00:17:25.934 "data_offset": 2048, 00:17:25.934 "data_size": 63488 00:17:25.934 }, 00:17:25.934 { 00:17:25.934 "name": "BaseBdev4", 00:17:25.934 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:25.934 "is_configured": true, 00:17:25.934 "data_offset": 2048, 00:17:25.934 "data_size": 63488 00:17:25.934 } 00:17:25.934 ] 00:17:25.934 }' 00:17:25.934 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.934 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.501 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.501 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.501 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.501 [2024-11-26 20:30:19.867855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.501 [2024-11-26 20:30:19.868152] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:26.501 [2024-11-26 20:30:19.868176] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:26.501 [2024-11-26 20:30:19.868227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.501 [2024-11-26 20:30:19.884926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:17:26.501 20:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.501 20:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:26.501 [2024-11-26 20:30:19.887257] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.437 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.437 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.437 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.437 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.438 "name": "raid_bdev1", 00:17:27.438 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:27.438 "strip_size_kb": 0, 00:17:27.438 "state": "online", 00:17:27.438 "raid_level": "raid1", 
00:17:27.438 "superblock": true, 00:17:27.438 "num_base_bdevs": 4, 00:17:27.438 "num_base_bdevs_discovered": 3, 00:17:27.438 "num_base_bdevs_operational": 3, 00:17:27.438 "process": { 00:17:27.438 "type": "rebuild", 00:17:27.438 "target": "spare", 00:17:27.438 "progress": { 00:17:27.438 "blocks": 20480, 00:17:27.438 "percent": 32 00:17:27.438 } 00:17:27.438 }, 00:17:27.438 "base_bdevs_list": [ 00:17:27.438 { 00:17:27.438 "name": "spare", 00:17:27.438 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:27.438 "is_configured": true, 00:17:27.438 "data_offset": 2048, 00:17:27.438 "data_size": 63488 00:17:27.438 }, 00:17:27.438 { 00:17:27.438 "name": null, 00:17:27.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.438 "is_configured": false, 00:17:27.438 "data_offset": 2048, 00:17:27.438 "data_size": 63488 00:17:27.438 }, 00:17:27.438 { 00:17:27.438 "name": "BaseBdev3", 00:17:27.438 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:27.438 "is_configured": true, 00:17:27.438 "data_offset": 2048, 00:17:27.438 "data_size": 63488 00:17:27.438 }, 00:17:27.438 { 00:17:27.438 "name": "BaseBdev4", 00:17:27.438 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:27.438 "is_configured": true, 00:17:27.438 "data_offset": 2048, 00:17:27.438 "data_size": 63488 00:17:27.438 } 00:17:27.438 ] 00:17:27.438 }' 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.438 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.696 20:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.696 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.696 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.696 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:27.696 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.696 [2024-11-26 20:30:21.030123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.696 [2024-11-26 20:30:21.097908] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.696 [2024-11-26 20:30:21.098005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.696 [2024-11-26 20:30:21.098031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.696 [2024-11-26 20:30:21.098040] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.696 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.696 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:27.696 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.697 "name": "raid_bdev1", 00:17:27.697 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:27.697 "strip_size_kb": 0, 00:17:27.697 "state": "online", 00:17:27.697 "raid_level": "raid1", 00:17:27.697 "superblock": true, 00:17:27.697 "num_base_bdevs": 4, 00:17:27.697 "num_base_bdevs_discovered": 2, 00:17:27.697 "num_base_bdevs_operational": 2, 00:17:27.697 "base_bdevs_list": [ 00:17:27.697 { 00:17:27.697 "name": null, 00:17:27.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.697 "is_configured": false, 00:17:27.697 "data_offset": 0, 00:17:27.697 "data_size": 63488 00:17:27.697 }, 00:17:27.697 { 00:17:27.697 "name": null, 00:17:27.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.697 "is_configured": false, 00:17:27.697 "data_offset": 2048, 00:17:27.697 "data_size": 63488 00:17:27.697 }, 00:17:27.697 { 00:17:27.697 "name": "BaseBdev3", 00:17:27.697 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:27.697 "is_configured": true, 00:17:27.697 "data_offset": 2048, 00:17:27.697 "data_size": 63488 00:17:27.697 }, 00:17:27.697 { 00:17:27.697 "name": "BaseBdev4", 00:17:27.697 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:27.697 "is_configured": true, 00:17:27.697 "data_offset": 2048, 00:17:27.697 "data_size": 63488 00:17:27.697 } 00:17:27.697 ] 00:17:27.697 }' 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:27.697 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.265 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.265 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.265 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.265 [2024-11-26 20:30:21.571960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.265 [2024-11-26 20:30:21.572065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.265 [2024-11-26 20:30:21.572113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:28.265 [2024-11-26 20:30:21.572133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.265 [2024-11-26 20:30:21.572821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.265 [2024-11-26 20:30:21.572866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.265 [2024-11-26 20:30:21.573012] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:28.265 [2024-11-26 20:30:21.573036] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:28.265 [2024-11-26 20:30:21.573060] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:28.265 [2024-11-26 20:30:21.573087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.265 [2024-11-26 20:30:21.591973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:17:28.265 spare 00:17:28.265 20:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.265 20:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:28.265 [2024-11-26 20:30:21.594628] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.200 "name": "raid_bdev1", 00:17:29.200 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:29.200 "strip_size_kb": 0, 00:17:29.200 "state": "online", 00:17:29.200 
"raid_level": "raid1", 00:17:29.200 "superblock": true, 00:17:29.200 "num_base_bdevs": 4, 00:17:29.200 "num_base_bdevs_discovered": 3, 00:17:29.200 "num_base_bdevs_operational": 3, 00:17:29.200 "process": { 00:17:29.200 "type": "rebuild", 00:17:29.200 "target": "spare", 00:17:29.200 "progress": { 00:17:29.200 "blocks": 20480, 00:17:29.200 "percent": 32 00:17:29.200 } 00:17:29.200 }, 00:17:29.200 "base_bdevs_list": [ 00:17:29.200 { 00:17:29.200 "name": "spare", 00:17:29.200 "uuid": "b5943e89-59cf-5eb3-8659-46ba6a783a76", 00:17:29.200 "is_configured": true, 00:17:29.200 "data_offset": 2048, 00:17:29.200 "data_size": 63488 00:17:29.200 }, 00:17:29.200 { 00:17:29.200 "name": null, 00:17:29.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.200 "is_configured": false, 00:17:29.200 "data_offset": 2048, 00:17:29.200 "data_size": 63488 00:17:29.200 }, 00:17:29.200 { 00:17:29.200 "name": "BaseBdev3", 00:17:29.200 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:29.200 "is_configured": true, 00:17:29.200 "data_offset": 2048, 00:17:29.200 "data_size": 63488 00:17:29.200 }, 00:17:29.200 { 00:17:29.200 "name": "BaseBdev4", 00:17:29.200 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:29.200 "is_configured": true, 00:17:29.200 "data_offset": 2048, 00:17:29.200 "data_size": 63488 00:17:29.200 } 00:17:29.200 ] 00:17:29.200 }' 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.200 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.200 [2024-11-26 20:30:22.737806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:29.458 [2024-11-26 20:30:22.805678] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:29.458 [2024-11-26 20:30:22.805787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.458 [2024-11-26 20:30:22.805809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:29.458 [2024-11-26 20:30:22.805821] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:29.458 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.458 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.458 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.458 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.458 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.458 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.458 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.459 
20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.459 "name": "raid_bdev1", 00:17:29.459 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:29.459 "strip_size_kb": 0, 00:17:29.459 "state": "online", 00:17:29.459 "raid_level": "raid1", 00:17:29.459 "superblock": true, 00:17:29.459 "num_base_bdevs": 4, 00:17:29.459 "num_base_bdevs_discovered": 2, 00:17:29.459 "num_base_bdevs_operational": 2, 00:17:29.459 "base_bdevs_list": [ 00:17:29.459 { 00:17:29.459 "name": null, 00:17:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.459 "is_configured": false, 00:17:29.459 "data_offset": 0, 00:17:29.459 "data_size": 63488 00:17:29.459 }, 00:17:29.459 { 00:17:29.459 "name": null, 00:17:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.459 "is_configured": false, 00:17:29.459 "data_offset": 2048, 00:17:29.459 "data_size": 63488 00:17:29.459 }, 00:17:29.459 { 00:17:29.459 "name": "BaseBdev3", 00:17:29.459 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:29.459 "is_configured": true, 00:17:29.459 "data_offset": 2048, 00:17:29.459 "data_size": 63488 00:17:29.459 }, 00:17:29.459 { 00:17:29.459 "name": "BaseBdev4", 00:17:29.459 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:29.459 "is_configured": true, 00:17:29.459 "data_offset": 2048, 00:17:29.459 "data_size": 63488 00:17:29.459 } 00:17:29.459 ] 00:17:29.459 }' 00:17:29.459 20:30:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.459 20:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.025 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.025 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.025 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.025 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.025 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.025 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.025 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.026 "name": "raid_bdev1", 00:17:30.026 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:30.026 "strip_size_kb": 0, 00:17:30.026 "state": "online", 00:17:30.026 "raid_level": "raid1", 00:17:30.026 "superblock": true, 00:17:30.026 "num_base_bdevs": 4, 00:17:30.026 "num_base_bdevs_discovered": 2, 00:17:30.026 "num_base_bdevs_operational": 2, 00:17:30.026 "base_bdevs_list": [ 00:17:30.026 { 00:17:30.026 "name": null, 00:17:30.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.026 "is_configured": false, 00:17:30.026 "data_offset": 0, 00:17:30.026 "data_size": 63488 00:17:30.026 }, 00:17:30.026 
{ 00:17:30.026 "name": null, 00:17:30.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.026 "is_configured": false, 00:17:30.026 "data_offset": 2048, 00:17:30.026 "data_size": 63488 00:17:30.026 }, 00:17:30.026 { 00:17:30.026 "name": "BaseBdev3", 00:17:30.026 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:30.026 "is_configured": true, 00:17:30.026 "data_offset": 2048, 00:17:30.026 "data_size": 63488 00:17:30.026 }, 00:17:30.026 { 00:17:30.026 "name": "BaseBdev4", 00:17:30.026 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:30.026 "is_configured": true, 00:17:30.026 "data_offset": 2048, 00:17:30.026 "data_size": 63488 00:17:30.026 } 00:17:30.026 ] 00:17:30.026 }' 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.026 [2024-11-26 20:30:23.467498] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.026 [2024-11-26 20:30:23.467593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.026 [2024-11-26 20:30:23.467620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:30.026 [2024-11-26 20:30:23.467635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.026 [2024-11-26 20:30:23.468297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.026 [2024-11-26 20:30:23.468336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.026 [2024-11-26 20:30:23.468453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:30.026 [2024-11-26 20:30:23.468482] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:30.026 [2024-11-26 20:30:23.468492] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:30.026 [2024-11-26 20:30:23.468524] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:30.026 BaseBdev1 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.026 20:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:30.963 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.963 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.963 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.963 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.963 20:30:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.964 20:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.223 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.223 "name": "raid_bdev1", 00:17:31.223 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:31.223 "strip_size_kb": 0, 00:17:31.223 "state": "online", 00:17:31.223 "raid_level": "raid1", 00:17:31.223 "superblock": true, 00:17:31.223 "num_base_bdevs": 4, 00:17:31.223 "num_base_bdevs_discovered": 2, 00:17:31.223 "num_base_bdevs_operational": 2, 00:17:31.223 "base_bdevs_list": [ 00:17:31.223 { 00:17:31.223 "name": null, 00:17:31.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.223 "is_configured": false, 00:17:31.223 "data_offset": 0, 00:17:31.223 "data_size": 63488 00:17:31.223 }, 00:17:31.223 { 00:17:31.223 "name": null, 00:17:31.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.223 
"is_configured": false, 00:17:31.223 "data_offset": 2048, 00:17:31.223 "data_size": 63488 00:17:31.223 }, 00:17:31.223 { 00:17:31.223 "name": "BaseBdev3", 00:17:31.223 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:31.223 "is_configured": true, 00:17:31.223 "data_offset": 2048, 00:17:31.223 "data_size": 63488 00:17:31.223 }, 00:17:31.223 { 00:17:31.223 "name": "BaseBdev4", 00:17:31.223 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:31.223 "is_configured": true, 00:17:31.223 "data_offset": 2048, 00:17:31.223 "data_size": 63488 00:17:31.223 } 00:17:31.223 ] 00:17:31.223 }' 00:17:31.223 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.223 20:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.482 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:31.482 "name": "raid_bdev1", 00:17:31.482 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:31.482 "strip_size_kb": 0, 00:17:31.482 "state": "online", 00:17:31.482 "raid_level": "raid1", 00:17:31.482 "superblock": true, 00:17:31.482 "num_base_bdevs": 4, 00:17:31.482 "num_base_bdevs_discovered": 2, 00:17:31.482 "num_base_bdevs_operational": 2, 00:17:31.482 "base_bdevs_list": [ 00:17:31.482 { 00:17:31.482 "name": null, 00:17:31.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.482 "is_configured": false, 00:17:31.482 "data_offset": 0, 00:17:31.482 "data_size": 63488 00:17:31.482 }, 00:17:31.482 { 00:17:31.482 "name": null, 00:17:31.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.483 "is_configured": false, 00:17:31.483 "data_offset": 2048, 00:17:31.483 "data_size": 63488 00:17:31.483 }, 00:17:31.483 { 00:17:31.483 "name": "BaseBdev3", 00:17:31.483 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:31.483 "is_configured": true, 00:17:31.483 "data_offset": 2048, 00:17:31.483 "data_size": 63488 00:17:31.483 }, 00:17:31.483 { 00:17:31.483 "name": "BaseBdev4", 00:17:31.483 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:31.483 "is_configured": true, 00:17:31.483 "data_offset": 2048, 00:17:31.483 "data_size": 63488 00:17:31.483 } 00:17:31.483 ] 00:17:31.483 }' 00:17:31.483 20:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.483 20:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.483 20:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.746 [2024-11-26 20:30:25.089073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.746 [2024-11-26 20:30:25.089402] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:31.746 [2024-11-26 20:30:25.089441] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:31.746 request: 00:17:31.746 { 00:17:31.746 "base_bdev": "BaseBdev1", 00:17:31.746 "raid_bdev": "raid_bdev1", 00:17:31.746 "method": "bdev_raid_add_base_bdev", 00:17:31.746 "req_id": 1 00:17:31.746 } 00:17:31.746 Got JSON-RPC error response 00:17:31.746 response: 00:17:31.746 { 00:17:31.746 "code": -22, 00:17:31.746 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:31.746 } 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.746 20:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.685 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.686 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.686 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.686 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:32.686 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.686 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.686 "name": "raid_bdev1", 00:17:32.686 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:32.686 "strip_size_kb": 0, 00:17:32.686 "state": "online", 00:17:32.686 "raid_level": "raid1", 00:17:32.686 "superblock": true, 00:17:32.686 "num_base_bdevs": 4, 00:17:32.686 "num_base_bdevs_discovered": 2, 00:17:32.686 "num_base_bdevs_operational": 2, 00:17:32.686 "base_bdevs_list": [ 00:17:32.686 { 00:17:32.686 "name": null, 00:17:32.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.686 "is_configured": false, 00:17:32.686 "data_offset": 0, 00:17:32.686 "data_size": 63488 00:17:32.686 }, 00:17:32.686 { 00:17:32.686 "name": null, 00:17:32.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.686 "is_configured": false, 00:17:32.686 "data_offset": 2048, 00:17:32.686 "data_size": 63488 00:17:32.686 }, 00:17:32.686 { 00:17:32.686 "name": "BaseBdev3", 00:17:32.686 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:32.686 "is_configured": true, 00:17:32.686 "data_offset": 2048, 00:17:32.686 "data_size": 63488 00:17:32.686 }, 00:17:32.686 { 00:17:32.686 "name": "BaseBdev4", 00:17:32.686 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:32.686 "is_configured": true, 00:17:32.686 "data_offset": 2048, 00:17:32.686 "data_size": 63488 00:17:32.686 } 00:17:32.686 ] 00:17:32.686 }' 00:17:32.686 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.686 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.252 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.252 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.252 20:30:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.252 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.252 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.252 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.252 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.253 "name": "raid_bdev1", 00:17:33.253 "uuid": "39e2549b-3e57-490e-adf3-ab36a2b056a9", 00:17:33.253 "strip_size_kb": 0, 00:17:33.253 "state": "online", 00:17:33.253 "raid_level": "raid1", 00:17:33.253 "superblock": true, 00:17:33.253 "num_base_bdevs": 4, 00:17:33.253 "num_base_bdevs_discovered": 2, 00:17:33.253 "num_base_bdevs_operational": 2, 00:17:33.253 "base_bdevs_list": [ 00:17:33.253 { 00:17:33.253 "name": null, 00:17:33.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.253 "is_configured": false, 00:17:33.253 "data_offset": 0, 00:17:33.253 "data_size": 63488 00:17:33.253 }, 00:17:33.253 { 00:17:33.253 "name": null, 00:17:33.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.253 "is_configured": false, 00:17:33.253 "data_offset": 2048, 00:17:33.253 "data_size": 63488 00:17:33.253 }, 00:17:33.253 { 00:17:33.253 "name": "BaseBdev3", 00:17:33.253 "uuid": "ae83cb92-9213-5166-8d4a-ac2c2dca854d", 00:17:33.253 "is_configured": true, 00:17:33.253 "data_offset": 2048, 00:17:33.253 "data_size": 63488 00:17:33.253 }, 
00:17:33.253 { 00:17:33.253 "name": "BaseBdev4", 00:17:33.253 "uuid": "e0856f48-588d-5f7c-bb1c-87516b822e2d", 00:17:33.253 "is_configured": true, 00:17:33.253 "data_offset": 2048, 00:17:33.253 "data_size": 63488 00:17:33.253 } 00:17:33.253 ] 00:17:33.253 }' 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78407 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78407 ']' 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78407 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78407 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.253 killing process with pid 78407 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78407' 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78407 00:17:33.253 Received shutdown signal, test time was about 60.000000 seconds 00:17:33.253 00:17:33.253 Latency(us) 00:17:33.253 
[2024-11-26T20:30:26.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.253 [2024-11-26T20:30:26.808Z] =================================================================================================================== 00:17:33.253 [2024-11-26T20:30:26.808Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.253 [2024-11-26 20:30:26.748327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.253 20:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78407 00:17:33.253 [2024-11-26 20:30:26.748508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.253 [2024-11-26 20:30:26.748604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.253 [2024-11-26 20:30:26.748621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:34.188 [2024-11-26 20:30:27.393786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:35.567 00:17:35.567 real 0m27.255s 00:17:35.567 user 0m32.724s 00:17:35.567 sys 0m4.068s 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 ************************************ 00:17:35.567 END TEST raid_rebuild_test_sb 00:17:35.567 ************************************ 00:17:35.567 20:30:28 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:35.567 20:30:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:35.567 20:30:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.567 20:30:28 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:17:35.567 ************************************ 00:17:35.567 START TEST raid_rebuild_test_io 00:17:35.567 ************************************ 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79178 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79178 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79178 ']' 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
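The trace above shows `raid_rebuild_test` generating the four base-bdev names in a counted loop before collecting them into the `base_bdevs` array. A minimal bash sketch of that name-generation step (an assumed simplification of what `bdev_raid.sh` traces here, not the actual SPDK source):

```shell
#!/usr/bin/env bash
# Sketch: build the BaseBdevN name list the same way the xtrace above shows,
# i.e. loop i from 1 to num_base_bdevs and collect "BaseBdev$i" into an array.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"  # → BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```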
00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:35.567 20:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:35.567 Zero copy mechanism will not be used. 00:17:35.567 [2024-11-26 20:30:29.012113] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:17:35.567 [2024-11-26 20:30:29.012265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79178 ] 00:17:35.825 [2024-11-26 20:30:29.187075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.825 [2024-11-26 20:30:29.322270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.083 [2024-11-26 20:30:29.545955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.083 [2024-11-26 20:30:29.546049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.342 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.342 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:36.342 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.342 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:36.342 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.342 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 BaseBdev1_malloc 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 [2024-11-26 20:30:29.941582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.604 [2024-11-26 20:30:29.941660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.604 [2024-11-26 20:30:29.941692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:36.604 [2024-11-26 20:30:29.941718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.604 [2024-11-26 20:30:29.944135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.604 [2024-11-26 20:30:29.944187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.604 BaseBdev1 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.604 20:30:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 BaseBdev2_malloc 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 [2024-11-26 20:30:29.995998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:36.604 [2024-11-26 20:30:29.996078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.604 [2024-11-26 20:30:29.996117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:36.604 [2024-11-26 20:30:29.996141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.604 [2024-11-26 20:30:29.998662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.604 [2024-11-26 20:30:29.998716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:36.604 BaseBdev2 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.604 20:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 BaseBdev3_malloc 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 [2024-11-26 20:30:30.062115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:36.604 [2024-11-26 20:30:30.062196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.604 [2024-11-26 20:30:30.062231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:36.604 [2024-11-26 20:30:30.062275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.604 [2024-11-26 20:30:30.064705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.604 [2024-11-26 20:30:30.064759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:36.604 BaseBdev3 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 BaseBdev4_malloc 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:36.604 20:30:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.604 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 [2024-11-26 20:30:30.123314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:36.604 [2024-11-26 20:30:30.123401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.604 [2024-11-26 20:30:30.123440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:36.604 [2024-11-26 20:30:30.123468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.604 [2024-11-26 20:30:30.125901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.605 [2024-11-26 20:30:30.125962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:36.605 BaseBdev4 00:17:36.605 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.605 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:36.605 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.605 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.867 spare_malloc 00:17:36.867 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.868 spare_delay 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.868 [2024-11-26 20:30:30.188942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:36.868 [2024-11-26 20:30:30.189018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.868 [2024-11-26 20:30:30.189055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:36.868 [2024-11-26 20:30:30.189083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.868 [2024-11-26 20:30:30.191584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.868 [2024-11-26 20:30:30.191637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:36.868 spare 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.868 [2024-11-26 20:30:30.200978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.868 [2024-11-26 20:30:30.203127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.868 [2024-11-26 20:30:30.203228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.868 [2024-11-26 
20:30:30.203349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:36.868 [2024-11-26 20:30:30.203482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:36.868 [2024-11-26 20:30:30.203509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:36.868 [2024-11-26 20:30:30.203874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:36.868 [2024-11-26 20:30:30.204126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:36.868 [2024-11-26 20:30:30.204152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:36.868 [2024-11-26 20:30:30.204387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.868 "name": "raid_bdev1", 00:17:36.868 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:36.868 "strip_size_kb": 0, 00:17:36.868 "state": "online", 00:17:36.868 "raid_level": "raid1", 00:17:36.868 "superblock": false, 00:17:36.868 "num_base_bdevs": 4, 00:17:36.868 "num_base_bdevs_discovered": 4, 00:17:36.868 "num_base_bdevs_operational": 4, 00:17:36.868 "base_bdevs_list": [ 00:17:36.868 { 00:17:36.868 "name": "BaseBdev1", 00:17:36.868 "uuid": "138b54c1-015d-5acc-b5f2-778f7afced4d", 00:17:36.868 "is_configured": true, 00:17:36.868 "data_offset": 0, 00:17:36.868 "data_size": 65536 00:17:36.868 }, 00:17:36.868 { 00:17:36.868 "name": "BaseBdev2", 00:17:36.868 "uuid": "ad55f56c-b691-5901-a88b-94f9e6ff9973", 00:17:36.868 "is_configured": true, 00:17:36.868 "data_offset": 0, 00:17:36.868 "data_size": 65536 00:17:36.868 }, 00:17:36.868 { 00:17:36.868 "name": "BaseBdev3", 00:17:36.868 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:36.868 "is_configured": true, 00:17:36.868 "data_offset": 0, 00:17:36.868 "data_size": 65536 00:17:36.868 }, 00:17:36.868 { 00:17:36.868 "name": "BaseBdev4", 00:17:36.868 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:36.868 "is_configured": true, 00:17:36.868 "data_offset": 0, 00:17:36.868 
"data_size": 65536 00:17:36.868 } 00:17:36.868 ] 00:17:36.868 }' 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.868 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.436 [2024-11-26 20:30:30.700521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.436 [2024-11-26 20:30:30.787979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.436 "name": "raid_bdev1", 00:17:37.436 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:37.436 "strip_size_kb": 0, 00:17:37.436 "state": "online", 00:17:37.436 "raid_level": "raid1", 00:17:37.436 "superblock": false, 00:17:37.436 "num_base_bdevs": 4, 00:17:37.436 "num_base_bdevs_discovered": 3, 00:17:37.436 "num_base_bdevs_operational": 3, 00:17:37.436 "base_bdevs_list": [ 00:17:37.436 { 00:17:37.436 "name": null, 00:17:37.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.436 "is_configured": false, 00:17:37.436 "data_offset": 0, 00:17:37.436 "data_size": 65536 00:17:37.436 }, 00:17:37.436 { 00:17:37.436 "name": "BaseBdev2", 00:17:37.436 "uuid": "ad55f56c-b691-5901-a88b-94f9e6ff9973", 00:17:37.436 "is_configured": true, 00:17:37.436 "data_offset": 0, 00:17:37.436 "data_size": 65536 00:17:37.436 }, 00:17:37.436 { 00:17:37.436 "name": "BaseBdev3", 00:17:37.436 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:37.436 "is_configured": true, 00:17:37.436 "data_offset": 0, 00:17:37.436 "data_size": 65536 00:17:37.436 }, 00:17:37.436 { 00:17:37.436 "name": "BaseBdev4", 00:17:37.436 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:37.436 "is_configured": true, 00:17:37.436 "data_offset": 0, 00:17:37.436 "data_size": 65536 00:17:37.436 } 00:17:37.436 ] 00:17:37.436 }' 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.436 20:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.436 [2024-11-26 20:30:30.897299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:37.436 I/O size of 3145728 is greater than zero copy threshold (65536). 
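The `verify_raid_bdev_state` calls traced above filter the `bdev_raid_get_bdevs` RPC output with `jq -r '.[] | select(.name == "raid_bdev1")'` and check that, after `bdev_raid_remove_base_bdev BaseBdev1`, the array stays online with 3 of 4 base bdevs. A minimal Python sketch of the equivalent check, using JSON abridged from this log (field subset assumed; the real RPC output carries more keys):

```python
import json

# Abridged bdev_raid_get_bdevs output after BaseBdev1 was removed,
# taken from the raid_bdev_info printed in this log.
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": null, "is_configured": false},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true},
      {"name": "BaseBdev4", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in rpc_output if b["name"] == "raid_bdev1")

# The state checks the test performs: still online, still raid1,
# and 3 discovered/operational base bdevs out of the original 4.
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_discovered"] == 3
assert info["num_base_bdevs_operational"] == 3
```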
00:17:37.436 Zero copy mechanism will not be used. 00:17:37.436 Running I/O for 60 seconds... 00:17:38.002 20:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.002 20:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.002 20:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.002 [2024-11-26 20:30:31.278841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.002 20:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.002 20:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:38.002 [2024-11-26 20:30:31.379810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:38.002 [2024-11-26 20:30:31.382161] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.002 [2024-11-26 20:30:31.511023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:38.002 [2024-11-26 20:30:31.511696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:38.261 [2024-11-26 20:30:31.739515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:38.261 [2024-11-26 20:30:31.739886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:38.779 148.00 IOPS, 444.00 MiB/s [2024-11-26T20:30:32.334Z] [2024-11-26 20:30:32.154859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.038 "name": "raid_bdev1", 00:17:39.038 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:39.038 "strip_size_kb": 0, 00:17:39.038 "state": "online", 00:17:39.038 "raid_level": "raid1", 00:17:39.038 "superblock": false, 00:17:39.038 "num_base_bdevs": 4, 00:17:39.038 "num_base_bdevs_discovered": 4, 00:17:39.038 "num_base_bdevs_operational": 4, 00:17:39.038 "process": { 00:17:39.038 "type": "rebuild", 00:17:39.038 "target": "spare", 00:17:39.038 "progress": { 00:17:39.038 "blocks": 12288, 00:17:39.038 "percent": 18 00:17:39.038 } 00:17:39.038 }, 00:17:39.038 "base_bdevs_list": [ 00:17:39.038 { 00:17:39.038 "name": "spare", 00:17:39.038 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:39.038 "is_configured": true, 00:17:39.038 "data_offset": 0, 00:17:39.038 "data_size": 65536 00:17:39.038 }, 00:17:39.038 { 00:17:39.038 "name": "BaseBdev2", 00:17:39.038 "uuid": 
"ad55f56c-b691-5901-a88b-94f9e6ff9973", 00:17:39.038 "is_configured": true, 00:17:39.038 "data_offset": 0, 00:17:39.038 "data_size": 65536 00:17:39.038 }, 00:17:39.038 { 00:17:39.038 "name": "BaseBdev3", 00:17:39.038 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:39.038 "is_configured": true, 00:17:39.038 "data_offset": 0, 00:17:39.038 "data_size": 65536 00:17:39.038 }, 00:17:39.038 { 00:17:39.038 "name": "BaseBdev4", 00:17:39.038 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:39.038 "is_configured": true, 00:17:39.038 "data_offset": 0, 00:17:39.038 "data_size": 65536 00:17:39.038 } 00:17:39.038 ] 00:17:39.038 }' 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 [2024-11-26 20:30:32.498100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.038 [2024-11-26 20:30:32.501194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:39.038 [2024-11-26 20:30:32.581842] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.038 [2024-11-26 20:30:32.585058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.038 [2024-11-26 20:30:32.585114] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.038 [2024-11-26 20:30:32.585130] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.298 [2024-11-26 20:30:32.626938] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.298 "name": "raid_bdev1", 00:17:39.298 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:39.298 "strip_size_kb": 0, 00:17:39.298 "state": "online", 00:17:39.298 "raid_level": "raid1", 00:17:39.298 "superblock": false, 00:17:39.298 "num_base_bdevs": 4, 00:17:39.298 "num_base_bdevs_discovered": 3, 00:17:39.298 "num_base_bdevs_operational": 3, 00:17:39.298 "base_bdevs_list": [ 00:17:39.298 { 00:17:39.298 "name": null, 00:17:39.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.298 "is_configured": false, 00:17:39.298 "data_offset": 0, 00:17:39.298 "data_size": 65536 00:17:39.298 }, 00:17:39.298 { 00:17:39.298 "name": "BaseBdev2", 00:17:39.298 "uuid": "ad55f56c-b691-5901-a88b-94f9e6ff9973", 00:17:39.298 "is_configured": true, 00:17:39.298 "data_offset": 0, 00:17:39.298 "data_size": 65536 00:17:39.298 }, 00:17:39.298 { 00:17:39.298 "name": "BaseBdev3", 00:17:39.298 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:39.298 "is_configured": true, 00:17:39.298 "data_offset": 0, 00:17:39.298 "data_size": 65536 00:17:39.298 }, 00:17:39.298 { 00:17:39.298 "name": "BaseBdev4", 00:17:39.298 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:39.298 "is_configured": true, 00:17:39.298 "data_offset": 0, 00:17:39.298 "data_size": 65536 00:17:39.298 } 00:17:39.298 ] 00:17:39.298 }' 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.298 20:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.557 139.00 IOPS, 417.00 MiB/s [2024-11-26T20:30:33.112Z] 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.557 20:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.817 20:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.817 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.817 "name": "raid_bdev1", 00:17:39.817 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:39.817 "strip_size_kb": 0, 00:17:39.817 "state": "online", 00:17:39.817 "raid_level": "raid1", 00:17:39.817 "superblock": false, 00:17:39.817 "num_base_bdevs": 4, 00:17:39.817 "num_base_bdevs_discovered": 3, 00:17:39.817 "num_base_bdevs_operational": 3, 00:17:39.817 "base_bdevs_list": [ 00:17:39.817 { 00:17:39.817 "name": null, 00:17:39.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.817 "is_configured": false, 00:17:39.817 "data_offset": 0, 00:17:39.817 "data_size": 65536 00:17:39.817 }, 00:17:39.817 { 00:17:39.817 "name": "BaseBdev2", 00:17:39.817 "uuid": "ad55f56c-b691-5901-a88b-94f9e6ff9973", 00:17:39.817 "is_configured": true, 00:17:39.817 "data_offset": 0, 00:17:39.817 "data_size": 65536 00:17:39.817 }, 00:17:39.817 { 00:17:39.817 "name": "BaseBdev3", 00:17:39.817 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:39.817 "is_configured": true, 00:17:39.818 "data_offset": 0, 
00:17:39.818 "data_size": 65536 00:17:39.818 }, 00:17:39.818 { 00:17:39.818 "name": "BaseBdev4", 00:17:39.818 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:39.818 "is_configured": true, 00:17:39.818 "data_offset": 0, 00:17:39.818 "data_size": 65536 00:17:39.818 } 00:17:39.818 ] 00:17:39.818 }' 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.818 [2024-11-26 20:30:33.235931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.818 20:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:39.818 [2024-11-26 20:30:33.319798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:39.818 [2024-11-26 20:30:33.322093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.077 [2024-11-26 20:30:33.433525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:40.077 [2024-11-26 20:30:33.434155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:40.077 
[2024-11-26 20:30:33.545020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:40.077 [2024-11-26 20:30:33.545414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:40.336 [2024-11-26 20:30:33.815735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:40.594 131.33 IOPS, 394.00 MiB/s [2024-11-26T20:30:34.149Z] [2024-11-26 20:30:33.937173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:40.594 [2024-11-26 20:30:33.937542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:40.853 [2024-11-26 20:30:34.263257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.853 "name": "raid_bdev1", 00:17:40.853 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:40.853 "strip_size_kb": 0, 00:17:40.853 "state": "online", 00:17:40.853 "raid_level": "raid1", 00:17:40.853 "superblock": false, 00:17:40.853 "num_base_bdevs": 4, 00:17:40.853 "num_base_bdevs_discovered": 4, 00:17:40.853 "num_base_bdevs_operational": 4, 00:17:40.853 "process": { 00:17:40.853 "type": "rebuild", 00:17:40.853 "target": "spare", 00:17:40.853 "progress": { 00:17:40.853 "blocks": 14336, 00:17:40.853 "percent": 21 00:17:40.853 } 00:17:40.853 }, 00:17:40.853 "base_bdevs_list": [ 00:17:40.853 { 00:17:40.853 "name": "spare", 00:17:40.853 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:40.853 "is_configured": true, 00:17:40.853 "data_offset": 0, 00:17:40.853 "data_size": 65536 00:17:40.853 }, 00:17:40.853 { 00:17:40.853 "name": "BaseBdev2", 00:17:40.853 "uuid": "ad55f56c-b691-5901-a88b-94f9e6ff9973", 00:17:40.853 "is_configured": true, 00:17:40.853 "data_offset": 0, 00:17:40.853 "data_size": 65536 00:17:40.853 }, 00:17:40.853 { 00:17:40.853 "name": "BaseBdev3", 00:17:40.853 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:40.853 "is_configured": true, 00:17:40.853 "data_offset": 0, 00:17:40.853 "data_size": 65536 00:17:40.853 }, 00:17:40.853 { 00:17:40.853 "name": "BaseBdev4", 00:17:40.853 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:40.853 "is_configured": true, 00:17:40.853 "data_offset": 0, 00:17:40.853 "data_size": 65536 00:17:40.853 } 00:17:40.853 ] 00:17:40.853 }' 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:40.853 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.113 [2024-11-26 20:30:34.429614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:41.113 [2024-11-26 20:30:34.514432] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:41.113 [2024-11-26 20:30:34.514486] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.113 "name": "raid_bdev1", 00:17:41.113 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:41.113 "strip_size_kb": 0, 00:17:41.113 "state": "online", 00:17:41.113 "raid_level": "raid1", 00:17:41.113 "superblock": false, 00:17:41.113 "num_base_bdevs": 4, 00:17:41.113 "num_base_bdevs_discovered": 3, 00:17:41.113 "num_base_bdevs_operational": 3, 00:17:41.113 "process": { 00:17:41.113 "type": "rebuild", 00:17:41.113 "target": "spare", 00:17:41.113 "progress": { 00:17:41.113 "blocks": 16384, 00:17:41.113 "percent": 25 00:17:41.113 } 00:17:41.113 }, 00:17:41.113 "base_bdevs_list": [ 00:17:41.113 { 00:17:41.113 "name": "spare", 00:17:41.113 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:41.113 "is_configured": true, 00:17:41.113 "data_offset": 0, 00:17:41.113 "data_size": 65536 00:17:41.113 }, 00:17:41.113 { 00:17:41.113 "name": null, 00:17:41.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.113 "is_configured": false, 00:17:41.113 "data_offset": 0, 00:17:41.113 "data_size": 65536 00:17:41.113 }, 00:17:41.113 { 00:17:41.113 "name": "BaseBdev3", 
00:17:41.113 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:41.113 "is_configured": true, 00:17:41.113 "data_offset": 0, 00:17:41.113 "data_size": 65536 00:17:41.113 }, 00:17:41.113 { 00:17:41.113 "name": "BaseBdev4", 00:17:41.113 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:41.113 "is_configured": true, 00:17:41.113 "data_offset": 0, 00:17:41.113 "data_size": 65536 00:17:41.113 } 00:17:41.113 ] 00:17:41.113 }' 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.113 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.374 "name": "raid_bdev1", 00:17:41.374 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:41.374 "strip_size_kb": 0, 00:17:41.374 "state": "online", 00:17:41.374 "raid_level": "raid1", 00:17:41.374 "superblock": false, 00:17:41.374 "num_base_bdevs": 4, 00:17:41.374 "num_base_bdevs_discovered": 3, 00:17:41.374 "num_base_bdevs_operational": 3, 00:17:41.374 "process": { 00:17:41.374 "type": "rebuild", 00:17:41.374 "target": "spare", 00:17:41.374 "progress": { 00:17:41.374 "blocks": 18432, 00:17:41.374 "percent": 28 00:17:41.374 } 00:17:41.374 }, 00:17:41.374 "base_bdevs_list": [ 00:17:41.374 { 00:17:41.374 "name": "spare", 00:17:41.374 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:41.374 "is_configured": true, 00:17:41.374 "data_offset": 0, 00:17:41.374 "data_size": 65536 00:17:41.374 }, 00:17:41.374 { 00:17:41.374 "name": null, 00:17:41.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.374 "is_configured": false, 00:17:41.374 "data_offset": 0, 00:17:41.374 "data_size": 65536 00:17:41.374 }, 00:17:41.374 { 00:17:41.374 "name": "BaseBdev3", 00:17:41.374 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:41.374 "is_configured": true, 00:17:41.374 "data_offset": 0, 00:17:41.374 "data_size": 65536 00:17:41.374 }, 00:17:41.374 { 00:17:41.374 "name": "BaseBdev4", 00:17:41.374 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:41.374 "is_configured": true, 00:17:41.374 "data_offset": 0, 00:17:41.374 "data_size": 65536 00:17:41.374 } 00:17:41.374 ] 00:17:41.374 }' 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.374 20:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.634 116.25 IOPS, 348.75 MiB/s [2024-11-26T20:30:35.189Z] [2024-11-26 20:30:35.166009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:41.634 [2024-11-26 20:30:35.166657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:42.205 [2024-11-26 20:30:35.482447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:42.205 [2024-11-26 20:30:35.483037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:42.205 [2024-11-26 20:30:35.608020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:42.205 [2024-11-26 20:30:35.608398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:42.464 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.464 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.464 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.464 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.464 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.464 
20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.464 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.465 "name": "raid_bdev1", 00:17:42.465 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:42.465 "strip_size_kb": 0, 00:17:42.465 "state": "online", 00:17:42.465 "raid_level": "raid1", 00:17:42.465 "superblock": false, 00:17:42.465 "num_base_bdevs": 4, 00:17:42.465 "num_base_bdevs_discovered": 3, 00:17:42.465 "num_base_bdevs_operational": 3, 00:17:42.465 "process": { 00:17:42.465 "type": "rebuild", 00:17:42.465 "target": "spare", 00:17:42.465 "progress": { 00:17:42.465 "blocks": 36864, 00:17:42.465 "percent": 56 00:17:42.465 } 00:17:42.465 }, 00:17:42.465 "base_bdevs_list": [ 00:17:42.465 { 00:17:42.465 "name": "spare", 00:17:42.465 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:42.465 "is_configured": true, 00:17:42.465 "data_offset": 0, 00:17:42.465 "data_size": 65536 00:17:42.465 }, 00:17:42.465 { 00:17:42.465 "name": null, 00:17:42.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.465 "is_configured": false, 00:17:42.465 "data_offset": 0, 00:17:42.465 "data_size": 65536 00:17:42.465 }, 00:17:42.465 { 00:17:42.465 "name": "BaseBdev3", 00:17:42.465 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:42.465 "is_configured": true, 00:17:42.465 "data_offset": 0, 00:17:42.465 "data_size": 65536 00:17:42.465 
}, 00:17:42.465 { 00:17:42.465 "name": "BaseBdev4", 00:17:42.465 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:42.465 "is_configured": true, 00:17:42.465 "data_offset": 0, 00:17:42.465 "data_size": 65536 00:17:42.465 } 00:17:42.465 ] 00:17:42.465 }' 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.465 103.40 IOPS, 310.20 MiB/s [2024-11-26T20:30:36.020Z] 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.465 [2024-11-26 20:30:35.933141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.465 20:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.724 [2024-11-26 20:30:36.046200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:43.293 [2024-11-26 20:30:36.728100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:43.552 95.00 IOPS, 285.00 MiB/s [2024-11-26T20:30:37.107Z] 20:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.552 20:30:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.552 20:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.552 20:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.552 "name": "raid_bdev1", 00:17:43.552 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:43.552 "strip_size_kb": 0, 00:17:43.552 "state": "online", 00:17:43.552 "raid_level": "raid1", 00:17:43.552 "superblock": false, 00:17:43.552 "num_base_bdevs": 4, 00:17:43.552 "num_base_bdevs_discovered": 3, 00:17:43.552 "num_base_bdevs_operational": 3, 00:17:43.552 "process": { 00:17:43.552 "type": "rebuild", 00:17:43.552 "target": "spare", 00:17:43.552 "progress": { 00:17:43.552 "blocks": 57344, 00:17:43.552 "percent": 87 00:17:43.552 } 00:17:43.552 }, 00:17:43.552 "base_bdevs_list": [ 00:17:43.552 { 00:17:43.552 "name": "spare", 00:17:43.552 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:43.552 "is_configured": true, 00:17:43.552 "data_offset": 0, 00:17:43.552 "data_size": 65536 00:17:43.552 }, 00:17:43.552 { 00:17:43.552 "name": null, 00:17:43.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.552 "is_configured": false, 00:17:43.552 "data_offset": 0, 00:17:43.552 "data_size": 65536 00:17:43.552 }, 00:17:43.552 { 00:17:43.552 "name": "BaseBdev3", 00:17:43.552 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:43.552 "is_configured": true, 00:17:43.552 "data_offset": 0, 00:17:43.552 "data_size": 65536 00:17:43.552 }, 
00:17:43.552 { 00:17:43.552 "name": "BaseBdev4", 00:17:43.552 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:43.552 "is_configured": true, 00:17:43.552 "data_offset": 0, 00:17:43.552 "data_size": 65536 00:17:43.552 } 00:17:43.552 ] 00:17:43.552 }' 00:17:43.552 20:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.552 20:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.552 20:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.552 20:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.552 20:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.811 [2024-11-26 20:30:37.358541] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:44.101 [2024-11-26 20:30:37.398825] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:44.101 [2024-11-26 20:30:37.403463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.618 86.14 IOPS, 258.43 MiB/s [2024-11-26T20:30:38.173Z] 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.618 "name": "raid_bdev1", 00:17:44.618 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:44.618 "strip_size_kb": 0, 00:17:44.618 "state": "online", 00:17:44.618 "raid_level": "raid1", 00:17:44.618 "superblock": false, 00:17:44.618 "num_base_bdevs": 4, 00:17:44.618 "num_base_bdevs_discovered": 3, 00:17:44.618 "num_base_bdevs_operational": 3, 00:17:44.618 "base_bdevs_list": [ 00:17:44.618 { 00:17:44.618 "name": "spare", 00:17:44.618 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:44.618 "is_configured": true, 00:17:44.618 "data_offset": 0, 00:17:44.618 "data_size": 65536 00:17:44.618 }, 00:17:44.618 { 00:17:44.618 "name": null, 00:17:44.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.618 "is_configured": false, 00:17:44.618 "data_offset": 0, 00:17:44.618 "data_size": 65536 00:17:44.618 }, 00:17:44.618 { 00:17:44.618 "name": "BaseBdev3", 00:17:44.618 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:44.618 "is_configured": true, 00:17:44.618 "data_offset": 0, 00:17:44.618 "data_size": 65536 00:17:44.618 }, 00:17:44.618 { 00:17:44.618 "name": "BaseBdev4", 00:17:44.618 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:44.618 "is_configured": true, 00:17:44.618 "data_offset": 0, 00:17:44.618 "data_size": 65536 00:17:44.618 } 00:17:44.618 ] 00:17:44.618 }' 00:17:44.618 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.878 
20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.878 "name": "raid_bdev1", 00:17:44.878 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:44.878 "strip_size_kb": 0, 00:17:44.878 "state": "online", 00:17:44.878 "raid_level": "raid1", 00:17:44.878 "superblock": false, 00:17:44.878 "num_base_bdevs": 4, 00:17:44.878 "num_base_bdevs_discovered": 3, 00:17:44.878 "num_base_bdevs_operational": 3, 00:17:44.878 
"base_bdevs_list": [ 00:17:44.878 { 00:17:44.878 "name": "spare", 00:17:44.878 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:44.878 "is_configured": true, 00:17:44.878 "data_offset": 0, 00:17:44.878 "data_size": 65536 00:17:44.878 }, 00:17:44.878 { 00:17:44.878 "name": null, 00:17:44.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.878 "is_configured": false, 00:17:44.878 "data_offset": 0, 00:17:44.878 "data_size": 65536 00:17:44.878 }, 00:17:44.878 { 00:17:44.878 "name": "BaseBdev3", 00:17:44.878 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:44.878 "is_configured": true, 00:17:44.878 "data_offset": 0, 00:17:44.878 "data_size": 65536 00:17:44.878 }, 00:17:44.878 { 00:17:44.878 "name": "BaseBdev4", 00:17:44.878 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:44.878 "is_configured": true, 00:17:44.878 "data_offset": 0, 00:17:44.878 "data_size": 65536 00:17:44.878 } 00:17:44.878 ] 00:17:44.878 }' 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.878 20:30:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.878 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.138 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.138 "name": "raid_bdev1", 00:17:45.138 "uuid": "3a1ee73e-55ac-4754-b88d-55b95b164612", 00:17:45.138 "strip_size_kb": 0, 00:17:45.138 "state": "online", 00:17:45.138 "raid_level": "raid1", 00:17:45.138 "superblock": false, 00:17:45.138 "num_base_bdevs": 4, 00:17:45.138 "num_base_bdevs_discovered": 3, 00:17:45.138 "num_base_bdevs_operational": 3, 00:17:45.138 "base_bdevs_list": [ 00:17:45.138 { 00:17:45.138 "name": "spare", 00:17:45.138 "uuid": "518dc3da-dbe2-569b-bd35-610368563af4", 00:17:45.138 "is_configured": true, 00:17:45.138 "data_offset": 0, 00:17:45.138 "data_size": 65536 00:17:45.138 }, 00:17:45.138 { 00:17:45.138 "name": null, 00:17:45.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.138 "is_configured": false, 00:17:45.138 "data_offset": 0, 00:17:45.138 "data_size": 65536 00:17:45.138 }, 
00:17:45.138 { 00:17:45.138 "name": "BaseBdev3", 00:17:45.138 "uuid": "c8405af9-4ecb-5781-9ccb-67bc3706b2fb", 00:17:45.138 "is_configured": true, 00:17:45.138 "data_offset": 0, 00:17:45.138 "data_size": 65536 00:17:45.138 }, 00:17:45.138 { 00:17:45.138 "name": "BaseBdev4", 00:17:45.138 "uuid": "025272ac-0a9a-52fa-86d6-5c62df9da892", 00:17:45.138 "is_configured": true, 00:17:45.138 "data_offset": 0, 00:17:45.138 "data_size": 65536 00:17:45.138 } 00:17:45.138 ] 00:17:45.138 }' 00:17:45.138 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.138 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.398 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.398 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.398 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.398 [2024-11-26 20:30:38.856597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.398 [2024-11-26 20:30:38.856742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.398 80.25 IOPS, 240.75 MiB/s 00:17:45.398 Latency(us) 00:17:45.398 [2024-11-26T20:30:38.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.398 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:45.398 raid_bdev1 : 8.06 79.94 239.82 0.00 0.00 17635.64 363.10 113099.68 00:17:45.398 [2024-11-26T20:30:38.953Z] =================================================================================================================== 00:17:45.398 [2024-11-26T20:30:38.953Z] Total : 79.94 239.82 0.00 0.00 17635.64 363.10 113099.68 00:17:45.657 [2024-11-26 20:30:38.967357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.657 
[2024-11-26 20:30:38.967514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.657 [2024-11-26 20:30:38.967673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.657 [2024-11-26 20:30:38.967746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:45.657 { 00:17:45.657 "results": [ 00:17:45.657 { 00:17:45.657 "job": "raid_bdev1", 00:17:45.657 "core_mask": "0x1", 00:17:45.657 "workload": "randrw", 00:17:45.657 "percentage": 50, 00:17:45.657 "status": "finished", 00:17:45.657 "queue_depth": 2, 00:17:45.657 "io_size": 3145728, 00:17:45.657 "runtime": 8.056176, 00:17:45.657 "iops": 79.93867065466296, 00:17:45.657 "mibps": 239.81601196398887, 00:17:45.657 "io_failed": 0, 00:17:45.657 "io_timeout": 0, 00:17:45.657 "avg_latency_us": 17635.63579158643, 00:17:45.657 "min_latency_us": 363.0951965065502, 00:17:45.657 "max_latency_us": 113099.68209606987 00:17:45.657 } 00:17:45.657 ], 00:17:45.657 "core_count": 1 00:17:45.657 } 00:17:45.657 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.657 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:45.657 20:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.657 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.657 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.657 20:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' 
true = true ']' 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.657 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:45.947 /dev/nbd0 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 
00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.947 1+0 records in 00:17:45.947 1+0 records out 00:17:45.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399056 s, 10.3 MB/s 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:45.947 
20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.947 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:46.206 /dev/nbd1 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.206 1+0 records in 00:17:46.206 1+0 records out 00:17:46.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697467 s, 5.9 MB/s 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.206 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:46.465 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:46.465 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.465 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:46.465 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:46.465 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:46.465 20:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.465 20:30:39 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.725 
20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.725 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:46.984 /dev/nbd1 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.985 1+0 records in 00:17:46.985 1+0 records out 00:17:46.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370396 s, 11.1 MB/s 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.985 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:47.244 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:47.244 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.245 20:30:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@784 -- # killprocess 79178 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79178 ']' 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79178 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.503 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79178 00:17:47.761 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.761 killing process with pid 79178 00:17:47.761 Received shutdown signal, test time was about 10.204636 seconds 00:17:47.761 00:17:47.761 Latency(us) 00:17:47.761 [2024-11-26T20:30:41.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.761 [2024-11-26T20:30:41.316Z] =================================================================================================================== 00:17:47.761 [2024-11-26T20:30:41.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.761 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.761 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79178' 00:17:47.761 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79178 00:17:47.761 [2024-11-26 20:30:41.084693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.761 20:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79178 00:17:48.327 [2024-11-26 20:30:41.583580] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.707 20:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:49.707 00:17:49.707 real 
0m14.065s 00:17:49.707 user 0m17.931s 00:17:49.708 sys 0m1.933s 00:17:49.708 ************************************ 00:17:49.708 END TEST raid_rebuild_test_io 00:17:49.708 ************************************ 00:17:49.708 20:30:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.708 20:30:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.708 20:30:43 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:49.708 20:30:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:49.708 20:30:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.708 20:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.708 ************************************ 00:17:49.708 START TEST raid_rebuild_test_sb_io 00:17:49.708 ************************************ 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:49.708 20:30:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79597 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79597 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79597 ']' 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.708 20:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.708 [2024-11-26 20:30:43.154426] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:17:49.708 [2024-11-26 20:30:43.154665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79597 ] 00:17:49.708 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:49.708 Zero copy mechanism will not be used. 00:17:49.967 [2024-11-26 20:30:43.318234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.967 [2024-11-26 20:30:43.456093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.227 [2024-11-26 20:30:43.694949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.227 [2024-11-26 20:30:43.694996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.806 BaseBdev1_malloc 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:50.806 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.806 20:30:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.806 [2024-11-26 20:30:44.133901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:50.806 [2024-11-26 20:30:44.134045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.806 [2024-11-26 20:30:44.134080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:50.806 [2024-11-26 20:30:44.134094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.807 [2024-11-26 20:30:44.136614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.807 [2024-11-26 20:30:44.136662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:50.807 BaseBdev1 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.807 BaseBdev2_malloc 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.807 [2024-11-26 20:30:44.192276] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:17:50.807 [2024-11-26 20:30:44.192348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.807 [2024-11-26 20:30:44.192374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:50.807 [2024-11-26 20:30:44.192387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.807 [2024-11-26 20:30:44.194890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.807 [2024-11-26 20:30:44.195013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:50.807 BaseBdev2 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.807 BaseBdev3_malloc 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.807 [2024-11-26 20:30:44.265067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:50.807 [2024-11-26 20:30:44.265207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.807 
[2024-11-26 20:30:44.265257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:50.807 [2024-11-26 20:30:44.265272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.807 [2024-11-26 20:30:44.267763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.807 [2024-11-26 20:30:44.267810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:50.807 BaseBdev3 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.807 BaseBdev4_malloc 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.807 [2024-11-26 20:30:44.326580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:50.807 [2024-11-26 20:30:44.326670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.807 [2024-11-26 20:30:44.326699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:50.807 [2024-11-26 20:30:44.326712] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.807 [2024-11-26 20:30:44.329202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.807 [2024-11-26 20:30:44.329272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:50.807 BaseBdev4 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.807 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.073 spare_malloc 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.073 spare_delay 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.073 [2024-11-26 20:30:44.399872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.073 [2024-11-26 20:30:44.399939] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:51.073 [2024-11-26 20:30:44.399972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:51.073 [2024-11-26 20:30:44.399984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.073 [2024-11-26 20:30:44.402469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.073 [2024-11-26 20:30:44.402575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.073 spare 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.073 [2024-11-26 20:30:44.411877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.073 [2024-11-26 20:30:44.413981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.073 [2024-11-26 20:30:44.414128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.073 [2024-11-26 20:30:44.414210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.073 [2024-11-26 20:30:44.414472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:51.073 [2024-11-26 20:30:44.414491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:51.073 [2024-11-26 20:30:44.414841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:51.073 [2024-11-26 20:30:44.415054] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:51.073 [2024-11-26 20:30:44.415067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:51.073 [2024-11-26 20:30:44.415299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.073 20:30:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.073 "name": "raid_bdev1", 00:17:51.073 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:51.073 "strip_size_kb": 0, 00:17:51.073 "state": "online", 00:17:51.073 "raid_level": "raid1", 00:17:51.073 "superblock": true, 00:17:51.073 "num_base_bdevs": 4, 00:17:51.073 "num_base_bdevs_discovered": 4, 00:17:51.073 "num_base_bdevs_operational": 4, 00:17:51.073 "base_bdevs_list": [ 00:17:51.073 { 00:17:51.073 "name": "BaseBdev1", 00:17:51.073 "uuid": "e7e6dcbd-7fdc-5fe6-8a28-459425ade7b3", 00:17:51.073 "is_configured": true, 00:17:51.073 "data_offset": 2048, 00:17:51.073 "data_size": 63488 00:17:51.073 }, 00:17:51.073 { 00:17:51.073 "name": "BaseBdev2", 00:17:51.073 "uuid": "874eb971-4685-5767-b11e-8019a8634ad5", 00:17:51.073 "is_configured": true, 00:17:51.073 "data_offset": 2048, 00:17:51.073 "data_size": 63488 00:17:51.073 }, 00:17:51.073 { 00:17:51.073 "name": "BaseBdev3", 00:17:51.073 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:51.073 "is_configured": true, 00:17:51.073 "data_offset": 2048, 00:17:51.073 "data_size": 63488 00:17:51.073 }, 00:17:51.073 { 00:17:51.073 "name": "BaseBdev4", 00:17:51.073 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:51.073 "is_configured": true, 00:17:51.073 "data_offset": 2048, 00:17:51.073 "data_size": 63488 00:17:51.073 } 00:17:51.073 ] 00:17:51.073 }' 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.073 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 [2024-11-26 20:30:44.915452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 20:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 [2024-11-26 20:30:45.002880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.642 "name": "raid_bdev1", 00:17:51.642 "uuid": 
"9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:51.642 "strip_size_kb": 0, 00:17:51.642 "state": "online", 00:17:51.642 "raid_level": "raid1", 00:17:51.642 "superblock": true, 00:17:51.642 "num_base_bdevs": 4, 00:17:51.642 "num_base_bdevs_discovered": 3, 00:17:51.642 "num_base_bdevs_operational": 3, 00:17:51.642 "base_bdevs_list": [ 00:17:51.642 { 00:17:51.642 "name": null, 00:17:51.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.642 "is_configured": false, 00:17:51.642 "data_offset": 0, 00:17:51.642 "data_size": 63488 00:17:51.642 }, 00:17:51.642 { 00:17:51.642 "name": "BaseBdev2", 00:17:51.642 "uuid": "874eb971-4685-5767-b11e-8019a8634ad5", 00:17:51.642 "is_configured": true, 00:17:51.642 "data_offset": 2048, 00:17:51.642 "data_size": 63488 00:17:51.642 }, 00:17:51.642 { 00:17:51.643 "name": "BaseBdev3", 00:17:51.643 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:51.643 "is_configured": true, 00:17:51.643 "data_offset": 2048, 00:17:51.643 "data_size": 63488 00:17:51.643 }, 00:17:51.643 { 00:17:51.643 "name": "BaseBdev4", 00:17:51.643 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:51.643 "is_configured": true, 00:17:51.643 "data_offset": 2048, 00:17:51.643 "data_size": 63488 00:17:51.643 } 00:17:51.643 ] 00:17:51.643 }' 00:17:51.643 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.643 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.643 [2024-11-26 20:30:45.119897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:51.643 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:51.643 Zero copy mechanism will not be used. 00:17:51.643 Running I/O for 60 seconds... 
00:17:51.902 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.902 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.902 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.902 [2024-11-26 20:30:45.449758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.162 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.162 20:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:52.162 [2024-11-26 20:30:45.520978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:52.162 [2024-11-26 20:30:45.523608] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.162 [2024-11-26 20:30:45.644768] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:52.162 [2024-11-26 20:30:45.645440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:52.427 [2024-11-26 20:30:45.859994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:52.427 [2024-11-26 20:30:45.860857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:52.687 134.00 IOPS, 402.00 MiB/s [2024-11-26T20:30:46.242Z] [2024-11-26 20:30:46.201965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:52.946 [2024-11-26 20:30:46.328469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:52.946 [2024-11-26 20:30:46.329383] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.207 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.207 "name": "raid_bdev1", 00:17:53.207 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:53.207 "strip_size_kb": 0, 00:17:53.207 "state": "online", 00:17:53.207 "raid_level": "raid1", 00:17:53.207 "superblock": true, 00:17:53.207 "num_base_bdevs": 4, 00:17:53.207 "num_base_bdevs_discovered": 4, 00:17:53.207 "num_base_bdevs_operational": 4, 00:17:53.207 "process": { 00:17:53.207 "type": "rebuild", 00:17:53.207 "target": "spare", 00:17:53.207 "progress": { 00:17:53.207 "blocks": 10240, 00:17:53.207 "percent": 16 00:17:53.207 } 00:17:53.207 }, 00:17:53.207 "base_bdevs_list": [ 00:17:53.207 { 00:17:53.207 "name": "spare", 
00:17:53.207 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:53.207 "is_configured": true, 00:17:53.207 "data_offset": 2048, 00:17:53.207 "data_size": 63488 00:17:53.207 }, 00:17:53.207 { 00:17:53.207 "name": "BaseBdev2", 00:17:53.207 "uuid": "874eb971-4685-5767-b11e-8019a8634ad5", 00:17:53.207 "is_configured": true, 00:17:53.208 "data_offset": 2048, 00:17:53.208 "data_size": 63488 00:17:53.208 }, 00:17:53.208 { 00:17:53.208 "name": "BaseBdev3", 00:17:53.208 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:53.208 "is_configured": true, 00:17:53.208 "data_offset": 2048, 00:17:53.208 "data_size": 63488 00:17:53.208 }, 00:17:53.208 { 00:17:53.208 "name": "BaseBdev4", 00:17:53.208 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:53.208 "is_configured": true, 00:17:53.208 "data_offset": 2048, 00:17:53.208 "data_size": 63488 00:17:53.208 } 00:17:53.208 ] 00:17:53.208 }' 00:17:53.208 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.208 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.208 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.208 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.208 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:53.208 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.208 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 [2024-11-26 20:30:46.648637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.208 [2024-11-26 20:30:46.676268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:53.208 [2024-11-26 
20:30:46.703783] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:53.208 [2024-11-26 20:30:46.709685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.208 [2024-11-26 20:30:46.709755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.208 [2024-11-26 20:30:46.709775] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.208 [2024-11-26 20:30:46.746104] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.468 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.468 "name": "raid_bdev1", 00:17:53.468 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:53.468 "strip_size_kb": 0, 00:17:53.468 "state": "online", 00:17:53.468 "raid_level": "raid1", 00:17:53.468 "superblock": true, 00:17:53.468 "num_base_bdevs": 4, 00:17:53.468 "num_base_bdevs_discovered": 3, 00:17:53.468 "num_base_bdevs_operational": 3, 00:17:53.468 "base_bdevs_list": [ 00:17:53.468 { 00:17:53.468 "name": null, 00:17:53.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.468 "is_configured": false, 00:17:53.468 "data_offset": 0, 00:17:53.468 "data_size": 63488 00:17:53.468 }, 00:17:53.468 { 00:17:53.468 "name": "BaseBdev2", 00:17:53.468 "uuid": "874eb971-4685-5767-b11e-8019a8634ad5", 00:17:53.468 "is_configured": true, 00:17:53.468 "data_offset": 2048, 00:17:53.468 "data_size": 63488 00:17:53.468 }, 00:17:53.468 { 00:17:53.468 "name": "BaseBdev3", 00:17:53.468 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:53.468 "is_configured": true, 00:17:53.468 "data_offset": 2048, 00:17:53.468 "data_size": 63488 00:17:53.468 }, 00:17:53.468 { 00:17:53.468 "name": "BaseBdev4", 00:17:53.468 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:53.468 "is_configured": true, 00:17:53.468 "data_offset": 2048, 00:17:53.469 "data_size": 63488 00:17:53.469 } 00:17:53.469 ] 00:17:53.469 }' 00:17:53.469 20:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.469 20:30:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.729 136.00 IOPS, 408.00 MiB/s [2024-11-26T20:30:47.284Z] 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.729 "name": "raid_bdev1", 00:17:53.729 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:53.729 "strip_size_kb": 0, 00:17:53.729 "state": "online", 00:17:53.729 "raid_level": "raid1", 00:17:53.729 "superblock": true, 00:17:53.729 "num_base_bdevs": 4, 00:17:53.729 "num_base_bdevs_discovered": 3, 00:17:53.729 "num_base_bdevs_operational": 3, 00:17:53.729 "base_bdevs_list": [ 00:17:53.729 { 00:17:53.729 "name": null, 00:17:53.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.729 "is_configured": false, 00:17:53.729 "data_offset": 0, 00:17:53.729 "data_size": 63488 00:17:53.729 }, 00:17:53.729 { 
00:17:53.729 "name": "BaseBdev2", 00:17:53.729 "uuid": "874eb971-4685-5767-b11e-8019a8634ad5", 00:17:53.729 "is_configured": true, 00:17:53.729 "data_offset": 2048, 00:17:53.729 "data_size": 63488 00:17:53.729 }, 00:17:53.729 { 00:17:53.729 "name": "BaseBdev3", 00:17:53.729 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:53.729 "is_configured": true, 00:17:53.729 "data_offset": 2048, 00:17:53.729 "data_size": 63488 00:17:53.729 }, 00:17:53.729 { 00:17:53.729 "name": "BaseBdev4", 00:17:53.729 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:53.729 "is_configured": true, 00:17:53.729 "data_offset": 2048, 00:17:53.729 "data_size": 63488 00:17:53.729 } 00:17:53.729 ] 00:17:53.729 }' 00:17:53.729 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.989 [2024-11-26 20:30:47.392424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.989 20:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:53.989 [2024-11-26 20:30:47.463058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:53.989 [2024-11-26 20:30:47.465633] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.249 [2024-11-26 20:30:47.585560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:54.249 [2024-11-26 20:30:47.586348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:54.249 [2024-11-26 20:30:47.727978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:54.509 [2024-11-26 20:30:48.024332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:54.768 158.33 IOPS, 475.00 MiB/s [2024-11-26T20:30:48.323Z] [2024-11-26 20:30:48.168934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:54.768 [2024-11-26 20:30:48.169356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:55.027 [2024-11-26 20:30:48.414763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.027 20:30:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.027 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.027 "name": "raid_bdev1", 00:17:55.027 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:55.027 "strip_size_kb": 0, 00:17:55.027 "state": "online", 00:17:55.027 "raid_level": "raid1", 00:17:55.027 "superblock": true, 00:17:55.027 "num_base_bdevs": 4, 00:17:55.027 "num_base_bdevs_discovered": 4, 00:17:55.027 "num_base_bdevs_operational": 4, 00:17:55.027 "process": { 00:17:55.027 "type": "rebuild", 00:17:55.027 "target": "spare", 00:17:55.027 "progress": { 00:17:55.027 "blocks": 14336, 00:17:55.027 "percent": 22 00:17:55.027 } 00:17:55.027 }, 00:17:55.027 "base_bdevs_list": [ 00:17:55.027 { 00:17:55.027 "name": "spare", 00:17:55.027 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:55.027 "is_configured": true, 00:17:55.027 "data_offset": 2048, 00:17:55.027 "data_size": 63488 00:17:55.027 }, 00:17:55.027 { 00:17:55.027 "name": "BaseBdev2", 00:17:55.027 "uuid": "874eb971-4685-5767-b11e-8019a8634ad5", 00:17:55.027 "is_configured": true, 00:17:55.027 "data_offset": 2048, 00:17:55.027 "data_size": 63488 00:17:55.027 }, 00:17:55.027 { 00:17:55.027 "name": "BaseBdev3", 00:17:55.027 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:55.027 "is_configured": true, 00:17:55.027 "data_offset": 2048, 00:17:55.028 "data_size": 63488 00:17:55.028 }, 00:17:55.028 { 00:17:55.028 "name": "BaseBdev4", 00:17:55.028 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:55.028 "is_configured": true, 00:17:55.028 "data_offset": 2048, 00:17:55.028 
"data_size": 63488 00:17:55.028 } 00:17:55.028 ] 00:17:55.028 }' 00:17:55.028 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.028 [2024-11-26 20:30:48.520510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:55.028 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.028 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:55.287 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.287 [2024-11-26 20:30:48.589831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:55.287 [2024-11-26 20:30:48.753564] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:55.287 [2024-11-26 20:30:48.753723] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.287 "name": "raid_bdev1", 00:17:55.287 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:55.287 "strip_size_kb": 0, 00:17:55.287 "state": "online", 00:17:55.287 "raid_level": "raid1", 00:17:55.287 "superblock": true, 00:17:55.287 "num_base_bdevs": 4, 00:17:55.287 "num_base_bdevs_discovered": 3, 00:17:55.287 
"num_base_bdevs_operational": 3, 00:17:55.287 "process": { 00:17:55.287 "type": "rebuild", 00:17:55.287 "target": "spare", 00:17:55.287 "progress": { 00:17:55.287 "blocks": 18432, 00:17:55.287 "percent": 29 00:17:55.287 } 00:17:55.287 }, 00:17:55.287 "base_bdevs_list": [ 00:17:55.287 { 00:17:55.287 "name": "spare", 00:17:55.287 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:55.287 "is_configured": true, 00:17:55.287 "data_offset": 2048, 00:17:55.287 "data_size": 63488 00:17:55.287 }, 00:17:55.287 { 00:17:55.287 "name": null, 00:17:55.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.287 "is_configured": false, 00:17:55.287 "data_offset": 0, 00:17:55.287 "data_size": 63488 00:17:55.287 }, 00:17:55.287 { 00:17:55.287 "name": "BaseBdev3", 00:17:55.287 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:55.287 "is_configured": true, 00:17:55.287 "data_offset": 2048, 00:17:55.287 "data_size": 63488 00:17:55.287 }, 00:17:55.287 { 00:17:55.287 "name": "BaseBdev4", 00:17:55.287 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:55.287 "is_configured": true, 00:17:55.287 "data_offset": 2048, 00:17:55.287 "data_size": 63488 00:17:55.287 } 00:17:55.287 ] 00:17:55.287 }' 00:17:55.287 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=521 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.546 "name": "raid_bdev1", 00:17:55.546 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:55.546 "strip_size_kb": 0, 00:17:55.546 "state": "online", 00:17:55.546 "raid_level": "raid1", 00:17:55.546 "superblock": true, 00:17:55.546 "num_base_bdevs": 4, 00:17:55.546 "num_base_bdevs_discovered": 3, 00:17:55.546 "num_base_bdevs_operational": 3, 00:17:55.546 "process": { 00:17:55.546 "type": "rebuild", 00:17:55.546 "target": "spare", 00:17:55.546 "progress": { 00:17:55.546 "blocks": 20480, 00:17:55.546 "percent": 32 00:17:55.546 } 00:17:55.546 }, 00:17:55.546 "base_bdevs_list": [ 00:17:55.546 { 00:17:55.546 "name": "spare", 00:17:55.546 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:55.546 "is_configured": true, 00:17:55.546 "data_offset": 2048, 00:17:55.546 "data_size": 63488 00:17:55.546 }, 00:17:55.546 { 00:17:55.546 "name": null, 
00:17:55.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.546 "is_configured": false, 00:17:55.546 "data_offset": 0, 00:17:55.546 "data_size": 63488 00:17:55.546 }, 00:17:55.546 { 00:17:55.546 "name": "BaseBdev3", 00:17:55.546 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:55.546 "is_configured": true, 00:17:55.546 "data_offset": 2048, 00:17:55.546 "data_size": 63488 00:17:55.546 }, 00:17:55.546 { 00:17:55.546 "name": "BaseBdev4", 00:17:55.546 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:55.546 "is_configured": true, 00:17:55.546 "data_offset": 2048, 00:17:55.546 "data_size": 63488 00:17:55.546 } 00:17:55.546 ] 00:17:55.546 }' 00:17:55.546 20:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.546 [2024-11-26 20:30:48.986612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:55.546 20:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.546 20:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.546 20:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.546 20:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.818 140.75 IOPS, 422.25 MiB/s [2024-11-26T20:30:49.373Z] [2024-11-26 20:30:49.328687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:56.077 [2024-11-26 20:30:49.462314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:56.336 [2024-11-26 20:30:49.696513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:56.336 [2024-11-26 20:30:49.697150] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:56.595 [2024-11-26 20:30:49.918353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.595 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.595 "name": "raid_bdev1", 00:17:56.595 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:56.595 "strip_size_kb": 0, 00:17:56.595 "state": "online", 00:17:56.595 "raid_level": "raid1", 00:17:56.595 "superblock": true, 00:17:56.595 "num_base_bdevs": 4, 00:17:56.595 "num_base_bdevs_discovered": 3, 00:17:56.595 "num_base_bdevs_operational": 3, 00:17:56.595 
"process": { 00:17:56.595 "type": "rebuild", 00:17:56.595 "target": "spare", 00:17:56.595 "progress": { 00:17:56.595 "blocks": 36864, 00:17:56.595 "percent": 58 00:17:56.595 } 00:17:56.595 }, 00:17:56.595 "base_bdevs_list": [ 00:17:56.595 { 00:17:56.595 "name": "spare", 00:17:56.595 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:56.595 "is_configured": true, 00:17:56.595 "data_offset": 2048, 00:17:56.595 "data_size": 63488 00:17:56.595 }, 00:17:56.595 { 00:17:56.595 "name": null, 00:17:56.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.595 "is_configured": false, 00:17:56.595 "data_offset": 0, 00:17:56.595 "data_size": 63488 00:17:56.595 }, 00:17:56.595 { 00:17:56.595 "name": "BaseBdev3", 00:17:56.595 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:56.595 "is_configured": true, 00:17:56.595 "data_offset": 2048, 00:17:56.595 "data_size": 63488 00:17:56.595 }, 00:17:56.595 { 00:17:56.595 "name": "BaseBdev4", 00:17:56.595 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:56.595 "is_configured": true, 00:17:56.595 "data_offset": 2048, 00:17:56.595 "data_size": 63488 00:17:56.595 } 00:17:56.595 ] 00:17:56.595 }' 00:17:56.595 124.60 IOPS, 373.80 MiB/s [2024-11-26T20:30:50.150Z] 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.854 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.854 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.854 [2024-11-26 20:30:50.217006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:56.854 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.854 20:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.113 [2024-11-26 20:30:50.436089] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:57.113 [2024-11-26 20:30:50.436611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:57.681 [2024-11-26 20:30:51.100133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:57.941 111.33 IOPS, 334.00 MiB/s [2024-11-26T20:30:51.496Z] 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.941 "name": "raid_bdev1", 00:17:57.941 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:57.941 "strip_size_kb": 0, 00:17:57.941 "state": 
"online", 00:17:57.941 "raid_level": "raid1", 00:17:57.941 "superblock": true, 00:17:57.941 "num_base_bdevs": 4, 00:17:57.941 "num_base_bdevs_discovered": 3, 00:17:57.941 "num_base_bdevs_operational": 3, 00:17:57.941 "process": { 00:17:57.941 "type": "rebuild", 00:17:57.941 "target": "spare", 00:17:57.941 "progress": { 00:17:57.941 "blocks": 53248, 00:17:57.941 "percent": 83 00:17:57.941 } 00:17:57.941 }, 00:17:57.941 "base_bdevs_list": [ 00:17:57.941 { 00:17:57.941 "name": "spare", 00:17:57.941 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:57.941 "is_configured": true, 00:17:57.941 "data_offset": 2048, 00:17:57.941 "data_size": 63488 00:17:57.941 }, 00:17:57.941 { 00:17:57.941 "name": null, 00:17:57.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.941 "is_configured": false, 00:17:57.941 "data_offset": 0, 00:17:57.941 "data_size": 63488 00:17:57.941 }, 00:17:57.941 { 00:17:57.941 "name": "BaseBdev3", 00:17:57.941 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:57.941 "is_configured": true, 00:17:57.941 "data_offset": 2048, 00:17:57.941 "data_size": 63488 00:17:57.941 }, 00:17:57.941 { 00:17:57.941 "name": "BaseBdev4", 00:17:57.941 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:57.941 "is_configured": true, 00:17:57.941 "data_offset": 2048, 00:17:57.941 "data_size": 63488 00:17:57.941 } 00:17:57.941 ] 00:17:57.941 }' 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.941 20:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.509 [2024-11-26 20:30:51.776117] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:58.509 [2024-11-26 20:30:51.875938] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:58.509 [2024-11-26 20:30:51.879462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.072 101.43 IOPS, 304.29 MiB/s [2024-11-26T20:30:52.627Z] 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.072 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.072 "name": "raid_bdev1", 00:17:59.072 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:59.072 "strip_size_kb": 0, 00:17:59.072 "state": "online", 00:17:59.072 "raid_level": "raid1", 00:17:59.072 "superblock": true, 00:17:59.072 
"num_base_bdevs": 4, 00:17:59.072 "num_base_bdevs_discovered": 3, 00:17:59.072 "num_base_bdevs_operational": 3, 00:17:59.072 "base_bdevs_list": [ 00:17:59.072 { 00:17:59.072 "name": "spare", 00:17:59.072 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:59.072 "is_configured": true, 00:17:59.072 "data_offset": 2048, 00:17:59.072 "data_size": 63488 00:17:59.072 }, 00:17:59.072 { 00:17:59.072 "name": null, 00:17:59.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.072 "is_configured": false, 00:17:59.072 "data_offset": 0, 00:17:59.072 "data_size": 63488 00:17:59.072 }, 00:17:59.072 { 00:17:59.073 "name": "BaseBdev3", 00:17:59.073 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:59.073 "is_configured": true, 00:17:59.073 "data_offset": 2048, 00:17:59.073 "data_size": 63488 00:17:59.073 }, 00:17:59.073 { 00:17:59.073 "name": "BaseBdev4", 00:17:59.073 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:59.073 "is_configured": true, 00:17:59.073 "data_offset": 2048, 00:17:59.073 "data_size": 63488 00:17:59.073 } 00:17:59.073 ] 00:17:59.073 }' 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.073 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.349 "name": "raid_bdev1", 00:17:59.349 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:59.349 "strip_size_kb": 0, 00:17:59.349 "state": "online", 00:17:59.349 "raid_level": "raid1", 00:17:59.349 "superblock": true, 00:17:59.349 "num_base_bdevs": 4, 00:17:59.349 "num_base_bdevs_discovered": 3, 00:17:59.349 "num_base_bdevs_operational": 3, 00:17:59.349 "base_bdevs_list": [ 00:17:59.349 { 00:17:59.349 "name": "spare", 00:17:59.349 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:59.349 "is_configured": true, 00:17:59.349 "data_offset": 2048, 00:17:59.349 "data_size": 63488 00:17:59.349 }, 00:17:59.349 { 00:17:59.349 "name": null, 00:17:59.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.349 "is_configured": false, 00:17:59.349 "data_offset": 0, 00:17:59.349 "data_size": 63488 00:17:59.349 }, 00:17:59.349 { 00:17:59.349 "name": "BaseBdev3", 00:17:59.349 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:59.349 "is_configured": true, 00:17:59.349 "data_offset": 2048, 00:17:59.349 "data_size": 63488 00:17:59.349 }, 00:17:59.349 { 00:17:59.349 "name": "BaseBdev4", 
00:17:59.349 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:59.349 "is_configured": true, 00:17:59.349 "data_offset": 2048, 00:17:59.349 "data_size": 63488 00:17:59.349 } 00:17:59.349 ] 00:17:59.349 }' 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.349 "name": "raid_bdev1", 00:17:59.349 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:17:59.349 "strip_size_kb": 0, 00:17:59.349 "state": "online", 00:17:59.349 "raid_level": "raid1", 00:17:59.349 "superblock": true, 00:17:59.349 "num_base_bdevs": 4, 00:17:59.349 "num_base_bdevs_discovered": 3, 00:17:59.349 "num_base_bdevs_operational": 3, 00:17:59.349 "base_bdevs_list": [ 00:17:59.349 { 00:17:59.349 "name": "spare", 00:17:59.349 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:17:59.349 "is_configured": true, 00:17:59.349 "data_offset": 2048, 00:17:59.349 "data_size": 63488 00:17:59.349 }, 00:17:59.349 { 00:17:59.349 "name": null, 00:17:59.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.349 "is_configured": false, 00:17:59.349 "data_offset": 0, 00:17:59.349 "data_size": 63488 00:17:59.349 }, 00:17:59.349 { 00:17:59.349 "name": "BaseBdev3", 00:17:59.349 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:17:59.349 "is_configured": true, 00:17:59.349 "data_offset": 2048, 00:17:59.349 "data_size": 63488 00:17:59.349 }, 00:17:59.349 { 00:17:59.349 "name": "BaseBdev4", 00:17:59.349 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:17:59.349 "is_configured": true, 00:17:59.349 "data_offset": 2048, 00:17:59.349 "data_size": 63488 00:17:59.349 } 00:17:59.349 ] 00:17:59.349 }' 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.349 20:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:17:59.869 93.00 IOPS, 279.00 MiB/s [2024-11-26T20:30:53.424Z] 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.869 [2024-11-26 20:30:53.198632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.869 [2024-11-26 20:30:53.198677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.869 00:17:59.869 Latency(us) 00:17:59.869 [2024-11-26T20:30:53.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.869 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:59.869 raid_bdev1 : 8.19 91.19 273.57 0.00 0.00 14932.39 443.58 119968.08 00:17:59.869 [2024-11-26T20:30:53.424Z] =================================================================================================================== 00:17:59.869 [2024-11-26T20:30:53.424Z] Total : 91.19 273.57 0.00 0.00 14932.39 443.58 119968.08 00:17:59.869 [2024-11-26 20:30:53.324970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.869 { 00:17:59.869 "results": [ 00:17:59.869 { 00:17:59.869 "job": "raid_bdev1", 00:17:59.869 "core_mask": "0x1", 00:17:59.869 "workload": "randrw", 00:17:59.869 "percentage": 50, 00:17:59.869 "status": "finished", 00:17:59.869 "queue_depth": 2, 00:17:59.869 "io_size": 3145728, 00:17:59.869 "runtime": 8.191699, 00:17:59.869 "iops": 91.18987404200277, 00:17:59.869 "mibps": 273.5696221260083, 00:17:59.869 "io_failed": 0, 00:17:59.869 "io_timeout": 0, 00:17:59.869 "avg_latency_us": 14932.393815144129, 00:17:59.869 "min_latency_us": 443.58427947598256, 00:17:59.869 "max_latency_us": 119968.08384279476 00:17:59.869 } 00:17:59.869 ], 
00:17:59.869 "core_count": 1 00:17:59.869 } 00:17:59.869 [2024-11-26 20:30:53.325189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.869 [2024-11-26 20:30:53.325376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.869 [2024-11-26 20:30:53.325398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:59.869 
20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:59.869 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:00.130 /dev/nbd0 00:18:00.130 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:00.130 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:00.130 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:00.130 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:00.130 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:00.130 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:00.130 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:00.389 1+0 records in 00:18:00.389 1+0 records out 00:18:00.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463268 s, 8.8 MB/s 00:18:00.389 
20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.389 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:00.649 /dev/nbd1 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:00.649 20:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:00.649 1+0 records in 00:18:00.649 1+0 records out 00:18:00.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696306 s, 5.9 MB/s 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.649 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:00.909 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:00.909 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.909 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:00.909 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:00.909 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:00.909 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.909 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.169 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:01.428 /dev/nbd1 00:18:01.428 20:30:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.428 1+0 records in 00:18:01.428 1+0 records out 00:18:01.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715149 s, 5.7 MB/s 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 
00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.428 20:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:01.688 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:01.688 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:01.688 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:01.688 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.688 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.688 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:01.688 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:01.689 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.689 
20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:01.689 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.689 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:01.689 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.689 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:01.689 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.689 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.947 [2024-11-26 20:30:55.479750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.947 [2024-11-26 20:30:55.479910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.947 [2024-11-26 20:30:55.480000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:01.947 [2024-11-26 20:30:55.480079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.947 [2024-11-26 20:30:55.483551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.947 [2024-11-26 20:30:55.483684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.947 [2024-11-26 20:30:55.483920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:01.947 [2024-11-26 20:30:55.484059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.947 [2024-11-26 20:30:55.484428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.947 [2024-11-26 20:30:55.484680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:01.947 spare 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.947 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.205 [2024-11-26 20:30:55.584709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:02.205 [2024-11-26 20:30:55.584947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:02.205 [2024-11-26 20:30:55.585467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:02.205 [2024-11-26 20:30:55.585813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:02.205 [2024-11-26 20:30:55.585892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:02.205 [2024-11-26 20:30:55.586152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.205 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.205 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:02.205 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.206 "name": "raid_bdev1", 00:18:02.206 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:02.206 "strip_size_kb": 0, 00:18:02.206 "state": "online", 00:18:02.206 "raid_level": "raid1", 00:18:02.206 "superblock": true, 00:18:02.206 "num_base_bdevs": 4, 00:18:02.206 "num_base_bdevs_discovered": 3, 00:18:02.206 "num_base_bdevs_operational": 3, 00:18:02.206 "base_bdevs_list": [ 00:18:02.206 { 00:18:02.206 "name": "spare", 00:18:02.206 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:18:02.206 "is_configured": true, 00:18:02.206 "data_offset": 2048, 00:18:02.206 "data_size": 63488 00:18:02.206 }, 00:18:02.206 { 00:18:02.206 "name": null, 00:18:02.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.206 "is_configured": false, 00:18:02.206 "data_offset": 2048, 00:18:02.206 "data_size": 63488 00:18:02.206 }, 00:18:02.206 { 00:18:02.206 "name": "BaseBdev3", 00:18:02.206 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:02.206 "is_configured": true, 00:18:02.206 "data_offset": 2048, 00:18:02.206 "data_size": 63488 00:18:02.206 }, 
00:18:02.206 { 00:18:02.206 "name": "BaseBdev4", 00:18:02.206 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:02.206 "is_configured": true, 00:18:02.206 "data_offset": 2048, 00:18:02.206 "data_size": 63488 00:18:02.206 } 00:18:02.206 ] 00:18:02.206 }' 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.206 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.465 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.465 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.465 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.465 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.465 20:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.465 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.465 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.465 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.465 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.725 "name": "raid_bdev1", 00:18:02.725 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:02.725 "strip_size_kb": 0, 00:18:02.725 "state": "online", 00:18:02.725 "raid_level": "raid1", 00:18:02.725 "superblock": true, 00:18:02.725 "num_base_bdevs": 4, 00:18:02.725 
"num_base_bdevs_discovered": 3, 00:18:02.725 "num_base_bdevs_operational": 3, 00:18:02.725 "base_bdevs_list": [ 00:18:02.725 { 00:18:02.725 "name": "spare", 00:18:02.725 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:18:02.725 "is_configured": true, 00:18:02.725 "data_offset": 2048, 00:18:02.725 "data_size": 63488 00:18:02.725 }, 00:18:02.725 { 00:18:02.725 "name": null, 00:18:02.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.725 "is_configured": false, 00:18:02.725 "data_offset": 2048, 00:18:02.725 "data_size": 63488 00:18:02.725 }, 00:18:02.725 { 00:18:02.725 "name": "BaseBdev3", 00:18:02.725 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:02.725 "is_configured": true, 00:18:02.725 "data_offset": 2048, 00:18:02.725 "data_size": 63488 00:18:02.725 }, 00:18:02.725 { 00:18:02.725 "name": "BaseBdev4", 00:18:02.725 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:02.725 "is_configured": true, 00:18:02.725 "data_offset": 2048, 00:18:02.725 "data_size": 63488 00:18:02.725 } 00:18:02.725 ] 00:18:02.725 }' 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.725 20:30:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:02.725 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.726 [2024-11-26 20:30:56.146994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.726 "name": "raid_bdev1", 00:18:02.726 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:02.726 "strip_size_kb": 0, 00:18:02.726 "state": "online", 00:18:02.726 "raid_level": "raid1", 00:18:02.726 "superblock": true, 00:18:02.726 "num_base_bdevs": 4, 00:18:02.726 "num_base_bdevs_discovered": 2, 00:18:02.726 "num_base_bdevs_operational": 2, 00:18:02.726 "base_bdevs_list": [ 00:18:02.726 { 00:18:02.726 "name": null, 00:18:02.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.726 "is_configured": false, 00:18:02.726 "data_offset": 0, 00:18:02.726 "data_size": 63488 00:18:02.726 }, 00:18:02.726 { 00:18:02.726 "name": null, 00:18:02.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.726 "is_configured": false, 00:18:02.726 "data_offset": 2048, 00:18:02.726 "data_size": 63488 00:18:02.726 }, 00:18:02.726 { 00:18:02.726 "name": "BaseBdev3", 00:18:02.726 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:02.726 "is_configured": true, 00:18:02.726 "data_offset": 2048, 00:18:02.726 "data_size": 63488 00:18:02.726 }, 00:18:02.726 { 00:18:02.726 "name": "BaseBdev4", 00:18:02.726 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:02.726 "is_configured": true, 00:18:02.726 "data_offset": 2048, 00:18:02.726 "data_size": 63488 00:18:02.726 } 00:18:02.726 ] 00:18:02.726 }' 00:18:02.726 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.726 20:30:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.986 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:02.986 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.986 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.986 [2024-11-26 20:30:56.534450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.986 [2024-11-26 20:30:56.534729] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:02.986 [2024-11-26 20:30:56.534803] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:02.986 [2024-11-26 20:30:56.534964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.246 [2024-11-26 20:30:56.553025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:03.246 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.246 20:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:03.246 [2024-11-26 20:30:56.555437] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.183 "name": "raid_bdev1", 00:18:04.183 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:04.183 "strip_size_kb": 0, 00:18:04.183 "state": "online", 00:18:04.183 "raid_level": "raid1", 00:18:04.183 "superblock": true, 00:18:04.183 "num_base_bdevs": 4, 00:18:04.183 "num_base_bdevs_discovered": 3, 00:18:04.183 "num_base_bdevs_operational": 3, 00:18:04.183 "process": { 00:18:04.183 "type": "rebuild", 00:18:04.183 "target": "spare", 00:18:04.183 "progress": { 00:18:04.183 "blocks": 20480, 00:18:04.183 "percent": 32 00:18:04.183 } 00:18:04.183 }, 00:18:04.183 "base_bdevs_list": [ 00:18:04.183 { 00:18:04.183 "name": "spare", 00:18:04.183 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:18:04.183 "is_configured": true, 00:18:04.183 "data_offset": 2048, 00:18:04.183 "data_size": 63488 00:18:04.183 }, 00:18:04.183 { 00:18:04.183 "name": null, 00:18:04.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.183 "is_configured": false, 00:18:04.183 "data_offset": 2048, 00:18:04.183 "data_size": 63488 00:18:04.183 }, 00:18:04.183 { 00:18:04.183 "name": "BaseBdev3", 00:18:04.183 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:04.183 "is_configured": true, 00:18:04.183 "data_offset": 2048, 00:18:04.183 "data_size": 63488 00:18:04.183 }, 00:18:04.183 { 
00:18:04.183 "name": "BaseBdev4", 00:18:04.183 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:04.183 "is_configured": true, 00:18:04.183 "data_offset": 2048, 00:18:04.183 "data_size": 63488 00:18:04.183 } 00:18:04.183 ] 00:18:04.183 }' 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.183 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.183 [2024-11-26 20:30:57.706621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.443 [2024-11-26 20:30:57.761427] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.443 [2024-11-26 20:30:57.761518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.443 [2024-11-26 20:30:57.761538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.443 [2024-11-26 20:30:57.761550] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.443 "name": "raid_bdev1", 00:18:04.443 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:04.443 "strip_size_kb": 0, 00:18:04.443 "state": "online", 00:18:04.443 "raid_level": "raid1", 00:18:04.443 "superblock": true, 00:18:04.443 "num_base_bdevs": 4, 00:18:04.443 "num_base_bdevs_discovered": 2, 00:18:04.443 "num_base_bdevs_operational": 2, 00:18:04.443 "base_bdevs_list": [ 00:18:04.443 { 00:18:04.443 
"name": null, 00:18:04.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.443 "is_configured": false, 00:18:04.443 "data_offset": 0, 00:18:04.443 "data_size": 63488 00:18:04.443 }, 00:18:04.443 { 00:18:04.443 "name": null, 00:18:04.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.443 "is_configured": false, 00:18:04.443 "data_offset": 2048, 00:18:04.443 "data_size": 63488 00:18:04.443 }, 00:18:04.443 { 00:18:04.443 "name": "BaseBdev3", 00:18:04.443 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:04.443 "is_configured": true, 00:18:04.443 "data_offset": 2048, 00:18:04.443 "data_size": 63488 00:18:04.443 }, 00:18:04.443 { 00:18:04.443 "name": "BaseBdev4", 00:18:04.443 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:04.443 "is_configured": true, 00:18:04.443 "data_offset": 2048, 00:18:04.443 "data_size": 63488 00:18:04.443 } 00:18:04.443 ] 00:18:04.443 }' 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.443 20:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.012 20:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.012 20:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.012 20:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.012 [2024-11-26 20:30:58.263798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.012 [2024-11-26 20:30:58.263944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.012 [2024-11-26 20:30:58.264001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:05.012 [2024-11-26 20:30:58.264053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.012 [2024-11-26 20:30:58.264643] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.012 [2024-11-26 20:30:58.264712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.012 [2024-11-26 20:30:58.264859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:05.012 [2024-11-26 20:30:58.264935] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:05.012 [2024-11-26 20:30:58.264988] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:05.012 [2024-11-26 20:30:58.265053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.012 [2024-11-26 20:30:58.282786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:05.012 spare 00:18:05.012 20:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.012 20:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:05.012 [2024-11-26 20:30:58.284978] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.948 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.948 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.948 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.948 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.948 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.948 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.949 "name": "raid_bdev1", 00:18:05.949 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:05.949 "strip_size_kb": 0, 00:18:05.949 "state": "online", 00:18:05.949 "raid_level": "raid1", 00:18:05.949 "superblock": true, 00:18:05.949 "num_base_bdevs": 4, 00:18:05.949 "num_base_bdevs_discovered": 3, 00:18:05.949 "num_base_bdevs_operational": 3, 00:18:05.949 "process": { 00:18:05.949 "type": "rebuild", 00:18:05.949 "target": "spare", 00:18:05.949 "progress": { 00:18:05.949 "blocks": 20480, 00:18:05.949 "percent": 32 00:18:05.949 } 00:18:05.949 }, 00:18:05.949 "base_bdevs_list": [ 00:18:05.949 { 00:18:05.949 "name": "spare", 00:18:05.949 "uuid": "4b628f57-80ee-502b-b2ae-7abb5b400c7d", 00:18:05.949 "is_configured": true, 00:18:05.949 "data_offset": 2048, 00:18:05.949 "data_size": 63488 00:18:05.949 }, 00:18:05.949 { 00:18:05.949 "name": null, 00:18:05.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.949 "is_configured": false, 00:18:05.949 "data_offset": 2048, 00:18:05.949 "data_size": 63488 00:18:05.949 }, 00:18:05.949 { 00:18:05.949 "name": "BaseBdev3", 00:18:05.949 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:05.949 "is_configured": true, 00:18:05.949 "data_offset": 2048, 00:18:05.949 "data_size": 63488 00:18:05.949 }, 00:18:05.949 { 00:18:05.949 "name": "BaseBdev4", 00:18:05.949 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:05.949 "is_configured": true, 00:18:05.949 "data_offset": 2048, 00:18:05.949 "data_size": 63488 00:18:05.949 } 00:18:05.949 
] 00:18:05.949 }' 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.949 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.949 [2024-11-26 20:30:59.408041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.949 [2024-11-26 20:30:59.490916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.949 [2024-11-26 20:30:59.490996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.949 [2024-11-26 20:30:59.491015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.949 [2024-11-26 20:30:59.491023] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.207 "name": "raid_bdev1", 00:18:06.207 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:06.207 "strip_size_kb": 0, 00:18:06.207 "state": "online", 00:18:06.207 "raid_level": "raid1", 00:18:06.207 "superblock": true, 00:18:06.207 "num_base_bdevs": 4, 00:18:06.207 "num_base_bdevs_discovered": 2, 00:18:06.207 "num_base_bdevs_operational": 2, 00:18:06.207 "base_bdevs_list": [ 00:18:06.207 { 00:18:06.207 "name": null, 00:18:06.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.207 "is_configured": false, 00:18:06.207 "data_offset": 0, 00:18:06.207 "data_size": 63488 00:18:06.207 }, 00:18:06.207 { 
00:18:06.207 "name": null, 00:18:06.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.207 "is_configured": false, 00:18:06.207 "data_offset": 2048, 00:18:06.207 "data_size": 63488 00:18:06.207 }, 00:18:06.207 { 00:18:06.207 "name": "BaseBdev3", 00:18:06.207 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:06.207 "is_configured": true, 00:18:06.207 "data_offset": 2048, 00:18:06.207 "data_size": 63488 00:18:06.207 }, 00:18:06.207 { 00:18:06.207 "name": "BaseBdev4", 00:18:06.207 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:06.207 "is_configured": true, 00:18:06.207 "data_offset": 2048, 00:18:06.207 "data_size": 63488 00:18:06.207 } 00:18:06.207 ] 00:18:06.207 }' 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.207 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.467 "name": "raid_bdev1", 00:18:06.467 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:06.467 "strip_size_kb": 0, 00:18:06.467 "state": "online", 00:18:06.467 "raid_level": "raid1", 00:18:06.467 "superblock": true, 00:18:06.467 "num_base_bdevs": 4, 00:18:06.467 "num_base_bdevs_discovered": 2, 00:18:06.467 "num_base_bdevs_operational": 2, 00:18:06.467 "base_bdevs_list": [ 00:18:06.467 { 00:18:06.467 "name": null, 00:18:06.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.467 "is_configured": false, 00:18:06.467 "data_offset": 0, 00:18:06.467 "data_size": 63488 00:18:06.467 }, 00:18:06.467 { 00:18:06.467 "name": null, 00:18:06.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.467 "is_configured": false, 00:18:06.467 "data_offset": 2048, 00:18:06.467 "data_size": 63488 00:18:06.467 }, 00:18:06.467 { 00:18:06.467 "name": "BaseBdev3", 00:18:06.467 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:06.467 "is_configured": true, 00:18:06.467 "data_offset": 2048, 00:18:06.467 "data_size": 63488 00:18:06.467 }, 00:18:06.467 { 00:18:06.467 "name": "BaseBdev4", 00:18:06.467 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:06.467 "is_configured": true, 00:18:06.467 "data_offset": 2048, 00:18:06.467 "data_size": 63488 00:18:06.467 } 00:18:06.467 ] 00:18:06.467 }' 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.467 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.467 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.726 [2024-11-26 20:31:00.082564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.726 [2024-11-26 20:31:00.082634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.726 [2024-11-26 20:31:00.082659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:06.726 [2024-11-26 20:31:00.082670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.726 [2024-11-26 20:31:00.083167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.726 [2024-11-26 20:31:00.083186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.726 [2024-11-26 20:31:00.083300] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:06.726 [2024-11-26 20:31:00.083320] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:06.726 [2024-11-26 20:31:00.083333] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:06.726 [2024-11-26 20:31:00.083345] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:06.726 BaseBdev1 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.706 "name": "raid_bdev1", 00:18:07.706 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:07.706 "strip_size_kb": 0, 00:18:07.706 "state": "online", 00:18:07.706 "raid_level": "raid1", 00:18:07.706 "superblock": true, 00:18:07.706 "num_base_bdevs": 4, 00:18:07.706 "num_base_bdevs_discovered": 2, 00:18:07.706 "num_base_bdevs_operational": 2, 00:18:07.706 "base_bdevs_list": [ 00:18:07.706 { 00:18:07.706 "name": null, 00:18:07.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.706 "is_configured": false, 00:18:07.706 "data_offset": 0, 00:18:07.706 "data_size": 63488 00:18:07.706 }, 00:18:07.706 { 00:18:07.706 "name": null, 00:18:07.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.706 "is_configured": false, 00:18:07.706 "data_offset": 2048, 00:18:07.706 "data_size": 63488 00:18:07.706 }, 00:18:07.706 { 00:18:07.706 "name": "BaseBdev3", 00:18:07.706 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:07.706 "is_configured": true, 00:18:07.706 "data_offset": 2048, 00:18:07.706 "data_size": 63488 00:18:07.706 }, 00:18:07.706 { 00:18:07.706 "name": "BaseBdev4", 00:18:07.706 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:07.706 "is_configured": true, 00:18:07.706 "data_offset": 2048, 00:18:07.706 "data_size": 63488 00:18:07.706 } 00:18:07.706 ] 00:18:07.706 }' 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.706 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.277 "name": "raid_bdev1", 00:18:08.277 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:08.277 "strip_size_kb": 0, 00:18:08.277 "state": "online", 00:18:08.277 "raid_level": "raid1", 00:18:08.277 "superblock": true, 00:18:08.277 "num_base_bdevs": 4, 00:18:08.277 "num_base_bdevs_discovered": 2, 00:18:08.277 "num_base_bdevs_operational": 2, 00:18:08.277 "base_bdevs_list": [ 00:18:08.277 { 00:18:08.277 "name": null, 00:18:08.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.277 "is_configured": false, 00:18:08.277 "data_offset": 0, 00:18:08.277 "data_size": 63488 00:18:08.277 }, 00:18:08.277 { 00:18:08.277 "name": null, 00:18:08.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.277 "is_configured": false, 00:18:08.277 "data_offset": 2048, 00:18:08.277 "data_size": 63488 00:18:08.277 }, 00:18:08.277 { 00:18:08.277 "name": "BaseBdev3", 00:18:08.277 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:08.277 "is_configured": true, 00:18:08.277 "data_offset": 2048, 00:18:08.277 "data_size": 63488 00:18:08.277 }, 00:18:08.277 { 00:18:08.277 
"name": "BaseBdev4", 00:18:08.277 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:08.277 "is_configured": true, 00:18:08.277 "data_offset": 2048, 00:18:08.277 "data_size": 63488 00:18:08.277 } 00:18:08.277 ] 00:18:08.277 }' 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.277 [2024-11-26 20:31:01.708372] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.277 [2024-11-26 20:31:01.708625] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:08.277 [2024-11-26 20:31:01.708698] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.277 request: 00:18:08.277 { 00:18:08.277 "base_bdev": "BaseBdev1", 00:18:08.277 "raid_bdev": "raid_bdev1", 00:18:08.277 "method": "bdev_raid_add_base_bdev", 00:18:08.277 "req_id": 1 00:18:08.277 } 00:18:08.277 Got JSON-RPC error response 00:18:08.277 response: 00:18:08.277 { 00:18:08.277 "code": -22, 00:18:08.277 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:08.277 } 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.277 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.278 20:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.216 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.474 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.475 "name": "raid_bdev1", 00:18:09.475 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:09.475 "strip_size_kb": 0, 00:18:09.475 "state": "online", 00:18:09.475 "raid_level": "raid1", 00:18:09.475 "superblock": true, 00:18:09.475 "num_base_bdevs": 4, 00:18:09.475 "num_base_bdevs_discovered": 2, 00:18:09.475 "num_base_bdevs_operational": 2, 00:18:09.475 "base_bdevs_list": [ 00:18:09.475 { 00:18:09.475 "name": null, 00:18:09.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.475 "is_configured": false, 00:18:09.475 "data_offset": 0, 00:18:09.475 "data_size": 63488 00:18:09.475 }, 00:18:09.475 { 00:18:09.475 "name": null, 00:18:09.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.475 "is_configured": false, 
00:18:09.475 "data_offset": 2048, 00:18:09.475 "data_size": 63488 00:18:09.475 }, 00:18:09.475 { 00:18:09.475 "name": "BaseBdev3", 00:18:09.475 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:09.475 "is_configured": true, 00:18:09.475 "data_offset": 2048, 00:18:09.475 "data_size": 63488 00:18:09.475 }, 00:18:09.475 { 00:18:09.475 "name": "BaseBdev4", 00:18:09.475 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:09.475 "is_configured": true, 00:18:09.475 "data_offset": 2048, 00:18:09.475 "data_size": 63488 00:18:09.475 } 00:18:09.475 ] 00:18:09.475 }' 00:18:09.475 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.475 20:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:09.733 "name": "raid_bdev1", 00:18:09.733 "uuid": "9fa2e252-b3c6-4056-a8c8-50efce50ff8b", 00:18:09.733 "strip_size_kb": 0, 00:18:09.733 "state": "online", 00:18:09.733 "raid_level": "raid1", 00:18:09.733 "superblock": true, 00:18:09.733 "num_base_bdevs": 4, 00:18:09.733 "num_base_bdevs_discovered": 2, 00:18:09.733 "num_base_bdevs_operational": 2, 00:18:09.733 "base_bdevs_list": [ 00:18:09.733 { 00:18:09.733 "name": null, 00:18:09.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.733 "is_configured": false, 00:18:09.733 "data_offset": 0, 00:18:09.733 "data_size": 63488 00:18:09.733 }, 00:18:09.733 { 00:18:09.733 "name": null, 00:18:09.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.733 "is_configured": false, 00:18:09.733 "data_offset": 2048, 00:18:09.733 "data_size": 63488 00:18:09.733 }, 00:18:09.733 { 00:18:09.733 "name": "BaseBdev3", 00:18:09.733 "uuid": "db203ca0-8d43-51c5-b41f-1683686fa183", 00:18:09.733 "is_configured": true, 00:18:09.733 "data_offset": 2048, 00:18:09.733 "data_size": 63488 00:18:09.733 }, 00:18:09.733 { 00:18:09.733 "name": "BaseBdev4", 00:18:09.733 "uuid": "757f4b30-cdd3-5f0f-b6da-3e66d2da4127", 00:18:09.733 "is_configured": true, 00:18:09.733 "data_offset": 2048, 00:18:09.733 "data_size": 63488 00:18:09.733 } 00:18:09.733 ] 00:18:09.733 }' 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79597 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
79597 ']' 00:18:09.733 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79597 00:18:09.734 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:09.734 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.991 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79597 00:18:09.991 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.991 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.991 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79597' 00:18:09.991 killing process with pid 79597 00:18:09.991 Received shutdown signal, test time was about 18.224042 seconds 00:18:09.991 00:18:09.991 Latency(us) 00:18:09.991 [2024-11-26T20:31:03.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.991 [2024-11-26T20:31:03.546Z] =================================================================================================================== 00:18:09.991 [2024-11-26T20:31:03.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.991 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79597 00:18:09.991 [2024-11-26 20:31:03.311403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.991 [2024-11-26 20:31:03.311557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.991 20:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79597 00:18:09.991 [2024-11-26 20:31:03.311640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.991 [2024-11-26 20:31:03.311653] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:10.250 [2024-11-26 20:31:03.762165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.626 20:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:11.626 00:18:11.626 real 0m22.041s 00:18:11.626 user 0m28.830s 00:18:11.626 sys 0m2.787s 00:18:11.626 20:31:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.626 ************************************ 00:18:11.626 END TEST raid_rebuild_test_sb_io 00:18:11.626 ************************************ 00:18:11.626 20:31:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.626 20:31:05 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:11.626 20:31:05 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:11.626 20:31:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:11.626 20:31:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.626 20:31:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.626 ************************************ 00:18:11.626 START TEST raid5f_state_function_test 00:18:11.626 ************************************ 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:11.626 Process raid pid: 80319 00:18:11.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80319 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80319' 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80319 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80319 ']' 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:11.626 20:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.885 [2024-11-26 20:31:05.261041] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:18:11.885 [2024-11-26 20:31:05.261267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.885 [2024-11-26 20:31:05.438517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.229 [2024-11-26 20:31:05.560427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.496 [2024-11-26 20:31:05.782684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.496 [2024-11-26 20:31:05.782816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.754 [2024-11-26 20:31:06.148434] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.754 [2024-11-26 20:31:06.148556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.754 [2024-11-26 20:31:06.148595] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.754 [2024-11-26 20:31:06.148632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.754 [2024-11-26 20:31:06.148670] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:12.754 [2024-11-26 20:31:06.148711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.754 "name": "Existed_Raid", 00:18:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.754 "strip_size_kb": 64, 00:18:12.754 "state": "configuring", 00:18:12.754 "raid_level": "raid5f", 00:18:12.754 "superblock": false, 00:18:12.754 "num_base_bdevs": 3, 00:18:12.754 "num_base_bdevs_discovered": 0, 00:18:12.754 "num_base_bdevs_operational": 3, 00:18:12.754 "base_bdevs_list": [ 00:18:12.754 { 00:18:12.754 "name": "BaseBdev1", 00:18:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.754 "is_configured": false, 00:18:12.754 "data_offset": 0, 00:18:12.754 "data_size": 0 00:18:12.754 }, 00:18:12.754 { 00:18:12.754 "name": "BaseBdev2", 00:18:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.754 "is_configured": false, 00:18:12.754 "data_offset": 0, 00:18:12.754 "data_size": 0 00:18:12.754 }, 00:18:12.754 { 00:18:12.754 "name": "BaseBdev3", 00:18:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.754 "is_configured": false, 00:18:12.754 "data_offset": 0, 00:18:12.754 "data_size": 0 00:18:12.754 } 00:18:12.754 ] 00:18:12.754 }' 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.754 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 [2024-11-26 20:31:06.603546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.323 [2024-11-26 20:31:06.603591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 [2024-11-26 20:31:06.615568] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:13.323 [2024-11-26 20:31:06.615635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:13.323 [2024-11-26 20:31:06.615646] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.323 [2024-11-26 20:31:06.615657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.323 [2024-11-26 20:31:06.615664] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.323 [2024-11-26 20:31:06.615674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 [2024-11-26 20:31:06.664870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.323 BaseBdev1 00:18:13.323 20:31:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 [ 00:18:13.323 { 00:18:13.323 "name": "BaseBdev1", 00:18:13.323 "aliases": [ 00:18:13.323 "2f021803-3fb9-4cbf-97cd-a733d142e7c4" 00:18:13.323 ], 00:18:13.323 "product_name": "Malloc disk", 00:18:13.323 "block_size": 512, 00:18:13.323 "num_blocks": 65536, 00:18:13.323 "uuid": "2f021803-3fb9-4cbf-97cd-a733d142e7c4", 00:18:13.323 "assigned_rate_limits": { 00:18:13.323 "rw_ios_per_sec": 0, 00:18:13.323 
"rw_mbytes_per_sec": 0, 00:18:13.323 "r_mbytes_per_sec": 0, 00:18:13.323 "w_mbytes_per_sec": 0 00:18:13.323 }, 00:18:13.323 "claimed": true, 00:18:13.323 "claim_type": "exclusive_write", 00:18:13.323 "zoned": false, 00:18:13.323 "supported_io_types": { 00:18:13.323 "read": true, 00:18:13.323 "write": true, 00:18:13.323 "unmap": true, 00:18:13.323 "flush": true, 00:18:13.323 "reset": true, 00:18:13.323 "nvme_admin": false, 00:18:13.323 "nvme_io": false, 00:18:13.323 "nvme_io_md": false, 00:18:13.323 "write_zeroes": true, 00:18:13.323 "zcopy": true, 00:18:13.323 "get_zone_info": false, 00:18:13.323 "zone_management": false, 00:18:13.323 "zone_append": false, 00:18:13.323 "compare": false, 00:18:13.323 "compare_and_write": false, 00:18:13.323 "abort": true, 00:18:13.323 "seek_hole": false, 00:18:13.323 "seek_data": false, 00:18:13.323 "copy": true, 00:18:13.323 "nvme_iov_md": false 00:18:13.323 }, 00:18:13.323 "memory_domains": [ 00:18:13.323 { 00:18:13.323 "dma_device_id": "system", 00:18:13.323 "dma_device_type": 1 00:18:13.323 }, 00:18:13.323 { 00:18:13.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.323 "dma_device_type": 2 00:18:13.323 } 00:18:13.323 ], 00:18:13.323 "driver_specific": {} 00:18:13.323 } 00:18:13.323 ] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.323 20:31:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.323 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.323 "name": "Existed_Raid", 00:18:13.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.323 "strip_size_kb": 64, 00:18:13.323 "state": "configuring", 00:18:13.323 "raid_level": "raid5f", 00:18:13.323 "superblock": false, 00:18:13.323 "num_base_bdevs": 3, 00:18:13.323 "num_base_bdevs_discovered": 1, 00:18:13.323 "num_base_bdevs_operational": 3, 00:18:13.323 "base_bdevs_list": [ 00:18:13.323 { 00:18:13.323 "name": "BaseBdev1", 00:18:13.323 "uuid": "2f021803-3fb9-4cbf-97cd-a733d142e7c4", 00:18:13.323 "is_configured": true, 00:18:13.323 "data_offset": 0, 00:18:13.323 "data_size": 65536 00:18:13.323 }, 00:18:13.323 { 00:18:13.323 "name": 
"BaseBdev2", 00:18:13.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.323 "is_configured": false, 00:18:13.323 "data_offset": 0, 00:18:13.323 "data_size": 0 00:18:13.323 }, 00:18:13.323 { 00:18:13.323 "name": "BaseBdev3", 00:18:13.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.323 "is_configured": false, 00:18:13.324 "data_offset": 0, 00:18:13.324 "data_size": 0 00:18:13.324 } 00:18:13.324 ] 00:18:13.324 }' 00:18:13.324 20:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.324 20:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.890 [2024-11-26 20:31:07.180070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.890 [2024-11-26 20:31:07.180134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.890 [2024-11-26 20:31:07.192107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.890 [2024-11-26 20:31:07.194178] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:13.890 [2024-11-26 20:31:07.194227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.890 [2024-11-26 20:31:07.194249] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.890 [2024-11-26 20:31:07.194260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.890 "name": "Existed_Raid", 00:18:13.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.890 "strip_size_kb": 64, 00:18:13.890 "state": "configuring", 00:18:13.890 "raid_level": "raid5f", 00:18:13.890 "superblock": false, 00:18:13.890 "num_base_bdevs": 3, 00:18:13.890 "num_base_bdevs_discovered": 1, 00:18:13.890 "num_base_bdevs_operational": 3, 00:18:13.890 "base_bdevs_list": [ 00:18:13.890 { 00:18:13.890 "name": "BaseBdev1", 00:18:13.890 "uuid": "2f021803-3fb9-4cbf-97cd-a733d142e7c4", 00:18:13.890 "is_configured": true, 00:18:13.890 "data_offset": 0, 00:18:13.890 "data_size": 65536 00:18:13.890 }, 00:18:13.890 { 00:18:13.890 "name": "BaseBdev2", 00:18:13.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.890 "is_configured": false, 00:18:13.890 "data_offset": 0, 00:18:13.890 "data_size": 0 00:18:13.890 }, 00:18:13.890 { 00:18:13.890 "name": "BaseBdev3", 00:18:13.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.890 "is_configured": false, 00:18:13.890 "data_offset": 0, 00:18:13.890 "data_size": 0 00:18:13.890 } 00:18:13.890 ] 00:18:13.890 }' 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.890 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.148 20:31:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.148 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.148 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.148 [2024-11-26 20:31:07.699316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.406 BaseBdev2 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.406 [ 00:18:14.406 { 00:18:14.406 "name": "BaseBdev2", 00:18:14.406 "aliases": [ 00:18:14.406 "694e43a3-2326-49b0-823b-418b1b77464f" 00:18:14.406 ], 00:18:14.406 "product_name": "Malloc disk", 00:18:14.406 "block_size": 512, 00:18:14.406 "num_blocks": 65536, 00:18:14.406 "uuid": "694e43a3-2326-49b0-823b-418b1b77464f", 00:18:14.406 "assigned_rate_limits": { 00:18:14.406 "rw_ios_per_sec": 0, 00:18:14.406 "rw_mbytes_per_sec": 0, 00:18:14.406 "r_mbytes_per_sec": 0, 00:18:14.406 "w_mbytes_per_sec": 0 00:18:14.406 }, 00:18:14.406 "claimed": true, 00:18:14.406 "claim_type": "exclusive_write", 00:18:14.406 "zoned": false, 00:18:14.406 "supported_io_types": { 00:18:14.406 "read": true, 00:18:14.406 "write": true, 00:18:14.406 "unmap": true, 00:18:14.406 "flush": true, 00:18:14.406 "reset": true, 00:18:14.406 "nvme_admin": false, 00:18:14.406 "nvme_io": false, 00:18:14.406 "nvme_io_md": false, 00:18:14.406 "write_zeroes": true, 00:18:14.406 "zcopy": true, 00:18:14.406 "get_zone_info": false, 00:18:14.406 "zone_management": false, 00:18:14.406 "zone_append": false, 00:18:14.406 "compare": false, 00:18:14.406 "compare_and_write": false, 00:18:14.406 "abort": true, 00:18:14.406 "seek_hole": false, 00:18:14.406 "seek_data": false, 00:18:14.406 "copy": true, 00:18:14.406 "nvme_iov_md": false 00:18:14.406 }, 00:18:14.406 "memory_domains": [ 00:18:14.406 { 00:18:14.406 "dma_device_id": "system", 00:18:14.406 "dma_device_type": 1 00:18:14.406 }, 00:18:14.406 { 00:18:14.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.406 "dma_device_type": 2 00:18:14.406 } 00:18:14.406 ], 00:18:14.406 "driver_specific": {} 00:18:14.406 } 00:18:14.406 ] 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:14.406 "name": "Existed_Raid", 00:18:14.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.406 "strip_size_kb": 64, 00:18:14.406 "state": "configuring", 00:18:14.406 "raid_level": "raid5f", 00:18:14.406 "superblock": false, 00:18:14.406 "num_base_bdevs": 3, 00:18:14.406 "num_base_bdevs_discovered": 2, 00:18:14.406 "num_base_bdevs_operational": 3, 00:18:14.406 "base_bdevs_list": [ 00:18:14.406 { 00:18:14.406 "name": "BaseBdev1", 00:18:14.406 "uuid": "2f021803-3fb9-4cbf-97cd-a733d142e7c4", 00:18:14.406 "is_configured": true, 00:18:14.406 "data_offset": 0, 00:18:14.406 "data_size": 65536 00:18:14.406 }, 00:18:14.406 { 00:18:14.406 "name": "BaseBdev2", 00:18:14.406 "uuid": "694e43a3-2326-49b0-823b-418b1b77464f", 00:18:14.406 "is_configured": true, 00:18:14.406 "data_offset": 0, 00:18:14.406 "data_size": 65536 00:18:14.406 }, 00:18:14.406 { 00:18:14.406 "name": "BaseBdev3", 00:18:14.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.406 "is_configured": false, 00:18:14.406 "data_offset": 0, 00:18:14.406 "data_size": 0 00:18:14.406 } 00:18:14.406 ] 00:18:14.406 }' 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.406 20:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.665 [2024-11-26 20:31:08.205556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:14.665 [2024-11-26 20:31:08.205635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:14.665 [2024-11-26 20:31:08.205652] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:14.665 [2024-11-26 20:31:08.205940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:14.665 [2024-11-26 20:31:08.212073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:14.665 [2024-11-26 20:31:08.212099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:14.665 [2024-11-26 20:31:08.212461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.665 BaseBdev3 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.665 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.984 [ 00:18:14.984 { 00:18:14.984 "name": "BaseBdev3", 00:18:14.984 "aliases": [ 00:18:14.984 "55f07c1d-f12f-4f3c-b067-cdf849c382ed" 00:18:14.984 ], 00:18:14.984 "product_name": "Malloc disk", 00:18:14.984 "block_size": 512, 00:18:14.984 "num_blocks": 65536, 00:18:14.984 "uuid": "55f07c1d-f12f-4f3c-b067-cdf849c382ed", 00:18:14.984 "assigned_rate_limits": { 00:18:14.984 "rw_ios_per_sec": 0, 00:18:14.984 "rw_mbytes_per_sec": 0, 00:18:14.984 "r_mbytes_per_sec": 0, 00:18:14.984 "w_mbytes_per_sec": 0 00:18:14.984 }, 00:18:14.984 "claimed": true, 00:18:14.984 "claim_type": "exclusive_write", 00:18:14.984 "zoned": false, 00:18:14.984 "supported_io_types": { 00:18:14.984 "read": true, 00:18:14.984 "write": true, 00:18:14.984 "unmap": true, 00:18:14.984 "flush": true, 00:18:14.984 "reset": true, 00:18:14.984 "nvme_admin": false, 00:18:14.984 "nvme_io": false, 00:18:14.984 "nvme_io_md": false, 00:18:14.984 "write_zeroes": true, 00:18:14.984 "zcopy": true, 00:18:14.984 "get_zone_info": false, 00:18:14.984 "zone_management": false, 00:18:14.984 "zone_append": false, 00:18:14.984 "compare": false, 00:18:14.984 "compare_and_write": false, 00:18:14.984 "abort": true, 00:18:14.984 "seek_hole": false, 00:18:14.984 "seek_data": false, 00:18:14.984 "copy": true, 00:18:14.984 "nvme_iov_md": false 00:18:14.984 }, 00:18:14.984 "memory_domains": [ 00:18:14.984 { 00:18:14.984 "dma_device_id": "system", 00:18:14.984 "dma_device_type": 1 00:18:14.984 }, 00:18:14.984 { 00:18:14.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.984 "dma_device_type": 2 00:18:14.984 } 00:18:14.984 ], 00:18:14.984 "driver_specific": {} 00:18:14.984 } 00:18:14.984 ] 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.984 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.985 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.985 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.985 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.985 20:31:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.985 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.985 "name": "Existed_Raid", 00:18:14.985 "uuid": "76965144-dc9b-48e1-b5e3-77d6e4d2b883", 00:18:14.985 "strip_size_kb": 64, 00:18:14.985 "state": "online", 00:18:14.985 "raid_level": "raid5f", 00:18:14.985 "superblock": false, 00:18:14.985 "num_base_bdevs": 3, 00:18:14.985 "num_base_bdevs_discovered": 3, 00:18:14.985 "num_base_bdevs_operational": 3, 00:18:14.985 "base_bdevs_list": [ 00:18:14.985 { 00:18:14.985 "name": "BaseBdev1", 00:18:14.985 "uuid": "2f021803-3fb9-4cbf-97cd-a733d142e7c4", 00:18:14.985 "is_configured": true, 00:18:14.985 "data_offset": 0, 00:18:14.985 "data_size": 65536 00:18:14.985 }, 00:18:14.985 { 00:18:14.985 "name": "BaseBdev2", 00:18:14.985 "uuid": "694e43a3-2326-49b0-823b-418b1b77464f", 00:18:14.985 "is_configured": true, 00:18:14.985 "data_offset": 0, 00:18:14.985 "data_size": 65536 00:18:14.985 }, 00:18:14.985 { 00:18:14.985 "name": "BaseBdev3", 00:18:14.985 "uuid": "55f07c1d-f12f-4f3c-b067-cdf849c382ed", 00:18:14.985 "is_configured": true, 00:18:14.985 "data_offset": 0, 00:18:14.985 "data_size": 65536 00:18:14.985 } 00:18:14.985 ] 00:18:14.985 }' 00:18:14.985 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.985 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.243 20:31:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.243 [2024-11-26 20:31:08.759237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.243 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.502 "name": "Existed_Raid", 00:18:15.502 "aliases": [ 00:18:15.502 "76965144-dc9b-48e1-b5e3-77d6e4d2b883" 00:18:15.502 ], 00:18:15.502 "product_name": "Raid Volume", 00:18:15.502 "block_size": 512, 00:18:15.502 "num_blocks": 131072, 00:18:15.502 "uuid": "76965144-dc9b-48e1-b5e3-77d6e4d2b883", 00:18:15.502 "assigned_rate_limits": { 00:18:15.502 "rw_ios_per_sec": 0, 00:18:15.502 "rw_mbytes_per_sec": 0, 00:18:15.502 "r_mbytes_per_sec": 0, 00:18:15.502 "w_mbytes_per_sec": 0 00:18:15.502 }, 00:18:15.502 "claimed": false, 00:18:15.502 "zoned": false, 00:18:15.502 "supported_io_types": { 00:18:15.502 "read": true, 00:18:15.502 "write": true, 00:18:15.502 "unmap": false, 00:18:15.502 "flush": false, 00:18:15.502 "reset": true, 00:18:15.502 "nvme_admin": false, 00:18:15.502 "nvme_io": false, 00:18:15.502 "nvme_io_md": false, 00:18:15.502 "write_zeroes": true, 00:18:15.502 "zcopy": false, 00:18:15.502 "get_zone_info": false, 00:18:15.502 "zone_management": false, 00:18:15.502 "zone_append": false, 
00:18:15.502 "compare": false, 00:18:15.502 "compare_and_write": false, 00:18:15.502 "abort": false, 00:18:15.502 "seek_hole": false, 00:18:15.502 "seek_data": false, 00:18:15.502 "copy": false, 00:18:15.502 "nvme_iov_md": false 00:18:15.502 }, 00:18:15.502 "driver_specific": { 00:18:15.502 "raid": { 00:18:15.502 "uuid": "76965144-dc9b-48e1-b5e3-77d6e4d2b883", 00:18:15.502 "strip_size_kb": 64, 00:18:15.502 "state": "online", 00:18:15.502 "raid_level": "raid5f", 00:18:15.502 "superblock": false, 00:18:15.502 "num_base_bdevs": 3, 00:18:15.502 "num_base_bdevs_discovered": 3, 00:18:15.502 "num_base_bdevs_operational": 3, 00:18:15.502 "base_bdevs_list": [ 00:18:15.502 { 00:18:15.502 "name": "BaseBdev1", 00:18:15.502 "uuid": "2f021803-3fb9-4cbf-97cd-a733d142e7c4", 00:18:15.502 "is_configured": true, 00:18:15.502 "data_offset": 0, 00:18:15.502 "data_size": 65536 00:18:15.502 }, 00:18:15.502 { 00:18:15.502 "name": "BaseBdev2", 00:18:15.502 "uuid": "694e43a3-2326-49b0-823b-418b1b77464f", 00:18:15.502 "is_configured": true, 00:18:15.502 "data_offset": 0, 00:18:15.502 "data_size": 65536 00:18:15.502 }, 00:18:15.502 { 00:18:15.502 "name": "BaseBdev3", 00:18:15.502 "uuid": "55f07c1d-f12f-4f3c-b067-cdf849c382ed", 00:18:15.502 "is_configured": true, 00:18:15.502 "data_offset": 0, 00:18:15.502 "data_size": 65536 00:18:15.502 } 00:18:15.502 ] 00:18:15.502 } 00:18:15.502 } 00:18:15.502 }' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:15.502 BaseBdev2 00:18:15.502 BaseBdev3' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.502 20:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.502 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.502 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.502 20:31:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.502 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:15.502 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.502 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.503 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.503 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.763 [2024-11-26 20:31:09.070566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:15.763 
20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.763 "name": "Existed_Raid", 00:18:15.763 "uuid": "76965144-dc9b-48e1-b5e3-77d6e4d2b883", 00:18:15.763 "strip_size_kb": 64, 00:18:15.763 "state": 
"online", 00:18:15.763 "raid_level": "raid5f", 00:18:15.763 "superblock": false, 00:18:15.763 "num_base_bdevs": 3, 00:18:15.763 "num_base_bdevs_discovered": 2, 00:18:15.763 "num_base_bdevs_operational": 2, 00:18:15.763 "base_bdevs_list": [ 00:18:15.763 { 00:18:15.763 "name": null, 00:18:15.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.763 "is_configured": false, 00:18:15.763 "data_offset": 0, 00:18:15.763 "data_size": 65536 00:18:15.763 }, 00:18:15.763 { 00:18:15.763 "name": "BaseBdev2", 00:18:15.763 "uuid": "694e43a3-2326-49b0-823b-418b1b77464f", 00:18:15.763 "is_configured": true, 00:18:15.763 "data_offset": 0, 00:18:15.763 "data_size": 65536 00:18:15.763 }, 00:18:15.763 { 00:18:15.763 "name": "BaseBdev3", 00:18:15.763 "uuid": "55f07c1d-f12f-4f3c-b067-cdf849c382ed", 00:18:15.763 "is_configured": true, 00:18:15.763 "data_offset": 0, 00:18:15.763 "data_size": 65536 00:18:15.763 } 00:18:15.763 ] 00:18:15.763 }' 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.763 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.330 [2024-11-26 20:31:09.704111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.330 [2024-11-26 20:31:09.704225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.330 [2024-11-26 20:31:09.807855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.330 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.330 [2024-11-26 20:31:09.871797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.330 [2024-11-26 20:31:09.871861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.588 20:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.588 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.589 BaseBdev2 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:16.589 [ 00:18:16.589 { 00:18:16.589 "name": "BaseBdev2", 00:18:16.589 "aliases": [ 00:18:16.589 "4d63a608-801f-42be-b5b9-65ee9bf4ba01" 00:18:16.589 ], 00:18:16.589 "product_name": "Malloc disk", 00:18:16.589 "block_size": 512, 00:18:16.589 "num_blocks": 65536, 00:18:16.589 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:16.589 "assigned_rate_limits": { 00:18:16.589 "rw_ios_per_sec": 0, 00:18:16.589 "rw_mbytes_per_sec": 0, 00:18:16.589 "r_mbytes_per_sec": 0, 00:18:16.589 "w_mbytes_per_sec": 0 00:18:16.589 }, 00:18:16.589 "claimed": false, 00:18:16.589 "zoned": false, 00:18:16.589 "supported_io_types": { 00:18:16.589 "read": true, 00:18:16.589 "write": true, 00:18:16.589 "unmap": true, 00:18:16.589 "flush": true, 00:18:16.589 "reset": true, 00:18:16.589 "nvme_admin": false, 00:18:16.589 "nvme_io": false, 00:18:16.589 "nvme_io_md": false, 00:18:16.589 "write_zeroes": true, 00:18:16.589 "zcopy": true, 00:18:16.589 "get_zone_info": false, 00:18:16.589 "zone_management": false, 00:18:16.589 "zone_append": false, 00:18:16.589 "compare": false, 00:18:16.589 "compare_and_write": false, 00:18:16.589 "abort": true, 00:18:16.589 "seek_hole": false, 00:18:16.589 "seek_data": false, 00:18:16.589 "copy": true, 00:18:16.589 "nvme_iov_md": false 00:18:16.589 }, 00:18:16.589 "memory_domains": [ 00:18:16.589 { 00:18:16.589 "dma_device_id": "system", 00:18:16.589 "dma_device_type": 1 00:18:16.589 }, 00:18:16.589 { 00:18:16.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.589 "dma_device_type": 2 00:18:16.589 } 00:18:16.589 ], 00:18:16.589 "driver_specific": {} 00:18:16.589 } 00:18:16.589 ] 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.589 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 BaseBdev3 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.849 20:31:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.849 [ 00:18:16.849 { 00:18:16.849 "name": "BaseBdev3", 00:18:16.849 "aliases": [ 00:18:16.849 "a829a472-152d-41ce-bbbf-374abcfe5e78" 00:18:16.849 ], 00:18:16.849 "product_name": "Malloc disk", 00:18:16.849 "block_size": 512, 00:18:16.849 "num_blocks": 65536, 00:18:16.849 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:16.849 "assigned_rate_limits": { 00:18:16.849 "rw_ios_per_sec": 0, 00:18:16.849 "rw_mbytes_per_sec": 0, 00:18:16.849 "r_mbytes_per_sec": 0, 00:18:16.849 "w_mbytes_per_sec": 0 00:18:16.849 }, 00:18:16.849 "claimed": false, 00:18:16.849 "zoned": false, 00:18:16.849 "supported_io_types": { 00:18:16.849 "read": true, 00:18:16.849 "write": true, 00:18:16.849 "unmap": true, 00:18:16.849 "flush": true, 00:18:16.849 "reset": true, 00:18:16.849 "nvme_admin": false, 00:18:16.849 "nvme_io": false, 00:18:16.849 "nvme_io_md": false, 00:18:16.849 "write_zeroes": true, 00:18:16.849 "zcopy": true, 00:18:16.849 "get_zone_info": false, 00:18:16.850 "zone_management": false, 00:18:16.850 "zone_append": false, 00:18:16.850 "compare": false, 00:18:16.850 "compare_and_write": false, 00:18:16.850 "abort": true, 00:18:16.850 "seek_hole": false, 00:18:16.850 "seek_data": false, 00:18:16.850 "copy": true, 00:18:16.850 "nvme_iov_md": false 00:18:16.850 }, 00:18:16.850 "memory_domains": [ 00:18:16.850 { 00:18:16.850 "dma_device_id": "system", 00:18:16.850 "dma_device_type": 1 00:18:16.850 }, 00:18:16.850 { 00:18:16.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.850 "dma_device_type": 2 00:18:16.850 } 00:18:16.850 ], 00:18:16.850 "driver_specific": {} 00:18:16.850 } 00:18:16.850 ] 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:16.850 20:31:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.850 [2024-11-26 20:31:10.217403] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.850 [2024-11-26 20:31:10.217450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.850 [2024-11-26 20:31:10.217494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.850 [2024-11-26 20:31:10.219509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.850 20:31:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.850 "name": "Existed_Raid", 00:18:16.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.850 "strip_size_kb": 64, 00:18:16.850 "state": "configuring", 00:18:16.850 "raid_level": "raid5f", 00:18:16.850 "superblock": false, 00:18:16.850 "num_base_bdevs": 3, 00:18:16.850 "num_base_bdevs_discovered": 2, 00:18:16.850 "num_base_bdevs_operational": 3, 00:18:16.850 "base_bdevs_list": [ 00:18:16.850 { 00:18:16.850 "name": "BaseBdev1", 00:18:16.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.850 "is_configured": false, 00:18:16.850 "data_offset": 0, 00:18:16.850 "data_size": 0 00:18:16.850 }, 00:18:16.850 { 00:18:16.850 "name": "BaseBdev2", 00:18:16.850 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:16.850 "is_configured": true, 00:18:16.850 "data_offset": 0, 00:18:16.850 "data_size": 65536 00:18:16.850 }, 00:18:16.850 { 00:18:16.850 "name": "BaseBdev3", 00:18:16.850 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:16.850 "is_configured": true, 
00:18:16.850 "data_offset": 0, 00:18:16.850 "data_size": 65536 00:18:16.850 } 00:18:16.850 ] 00:18:16.850 }' 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.850 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.419 [2024-11-26 20:31:10.696709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.419 20:31:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.419 "name": "Existed_Raid", 00:18:17.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.419 "strip_size_kb": 64, 00:18:17.419 "state": "configuring", 00:18:17.419 "raid_level": "raid5f", 00:18:17.419 "superblock": false, 00:18:17.419 "num_base_bdevs": 3, 00:18:17.419 "num_base_bdevs_discovered": 1, 00:18:17.419 "num_base_bdevs_operational": 3, 00:18:17.419 "base_bdevs_list": [ 00:18:17.419 { 00:18:17.419 "name": "BaseBdev1", 00:18:17.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.419 "is_configured": false, 00:18:17.419 "data_offset": 0, 00:18:17.419 "data_size": 0 00:18:17.419 }, 00:18:17.419 { 00:18:17.419 "name": null, 00:18:17.419 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:17.419 "is_configured": false, 00:18:17.419 "data_offset": 0, 00:18:17.419 "data_size": 65536 00:18:17.419 }, 00:18:17.419 { 00:18:17.419 "name": "BaseBdev3", 00:18:17.419 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:17.419 "is_configured": true, 00:18:17.419 "data_offset": 0, 00:18:17.419 "data_size": 65536 00:18:17.419 } 00:18:17.419 ] 00:18:17.419 }' 00:18:17.419 20:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.419 20:31:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.678 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.679 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.938 [2024-11-26 20:31:11.257306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.938 BaseBdev1 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.938 20:31:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.938 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.938 [ 00:18:17.938 { 00:18:17.938 "name": "BaseBdev1", 00:18:17.938 "aliases": [ 00:18:17.938 "d92d2692-c1c1-47df-b3ed-03293aa7c885" 00:18:17.938 ], 00:18:17.938 "product_name": "Malloc disk", 00:18:17.938 "block_size": 512, 00:18:17.938 "num_blocks": 65536, 00:18:17.938 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:17.938 "assigned_rate_limits": { 00:18:17.939 "rw_ios_per_sec": 0, 00:18:17.939 "rw_mbytes_per_sec": 0, 00:18:17.939 "r_mbytes_per_sec": 0, 00:18:17.939 "w_mbytes_per_sec": 0 00:18:17.939 }, 00:18:17.939 "claimed": true, 00:18:17.939 "claim_type": "exclusive_write", 00:18:17.939 "zoned": false, 00:18:17.939 "supported_io_types": { 00:18:17.939 "read": true, 00:18:17.939 "write": true, 00:18:17.939 "unmap": true, 00:18:17.939 "flush": true, 00:18:17.939 "reset": true, 00:18:17.939 "nvme_admin": false, 00:18:17.939 "nvme_io": false, 00:18:17.939 "nvme_io_md": false, 00:18:17.939 "write_zeroes": true, 00:18:17.939 "zcopy": true, 00:18:17.939 "get_zone_info": false, 00:18:17.939 "zone_management": false, 00:18:17.939 "zone_append": false, 00:18:17.939 
"compare": false, 00:18:17.939 "compare_and_write": false, 00:18:17.939 "abort": true, 00:18:17.939 "seek_hole": false, 00:18:17.939 "seek_data": false, 00:18:17.939 "copy": true, 00:18:17.939 "nvme_iov_md": false 00:18:17.939 }, 00:18:17.939 "memory_domains": [ 00:18:17.939 { 00:18:17.939 "dma_device_id": "system", 00:18:17.939 "dma_device_type": 1 00:18:17.939 }, 00:18:17.939 { 00:18:17.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.939 "dma_device_type": 2 00:18:17.939 } 00:18:17.939 ], 00:18:17.939 "driver_specific": {} 00:18:17.939 } 00:18:17.939 ] 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.939 20:31:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.939 "name": "Existed_Raid", 00:18:17.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.939 "strip_size_kb": 64, 00:18:17.939 "state": "configuring", 00:18:17.939 "raid_level": "raid5f", 00:18:17.939 "superblock": false, 00:18:17.939 "num_base_bdevs": 3, 00:18:17.939 "num_base_bdevs_discovered": 2, 00:18:17.939 "num_base_bdevs_operational": 3, 00:18:17.939 "base_bdevs_list": [ 00:18:17.939 { 00:18:17.939 "name": "BaseBdev1", 00:18:17.939 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:17.939 "is_configured": true, 00:18:17.939 "data_offset": 0, 00:18:17.939 "data_size": 65536 00:18:17.939 }, 00:18:17.939 { 00:18:17.939 "name": null, 00:18:17.939 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:17.939 "is_configured": false, 00:18:17.939 "data_offset": 0, 00:18:17.939 "data_size": 65536 00:18:17.939 }, 00:18:17.939 { 00:18:17.939 "name": "BaseBdev3", 00:18:17.939 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:17.939 "is_configured": true, 00:18:17.939 "data_offset": 0, 00:18:17.939 "data_size": 65536 00:18:17.939 } 00:18:17.939 ] 00:18:17.939 }' 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.939 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.198 20:31:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:18.198 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.198 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.198 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.198 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.457 [2024-11-26 20:31:11.760536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.457 20:31:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.457 "name": "Existed_Raid", 00:18:18.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.457 "strip_size_kb": 64, 00:18:18.457 "state": "configuring", 00:18:18.457 "raid_level": "raid5f", 00:18:18.457 "superblock": false, 00:18:18.457 "num_base_bdevs": 3, 00:18:18.457 "num_base_bdevs_discovered": 1, 00:18:18.457 "num_base_bdevs_operational": 3, 00:18:18.457 "base_bdevs_list": [ 00:18:18.457 { 00:18:18.457 "name": "BaseBdev1", 00:18:18.457 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:18.457 "is_configured": true, 00:18:18.457 "data_offset": 0, 00:18:18.457 "data_size": 65536 00:18:18.457 }, 00:18:18.457 { 00:18:18.457 "name": null, 00:18:18.457 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:18.457 "is_configured": false, 00:18:18.457 "data_offset": 0, 00:18:18.457 "data_size": 65536 00:18:18.457 }, 00:18:18.457 { 00:18:18.457 "name": null, 
00:18:18.457 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:18.457 "is_configured": false, 00:18:18.457 "data_offset": 0, 00:18:18.457 "data_size": 65536 00:18:18.457 } 00:18:18.457 ] 00:18:18.457 }' 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.457 20:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.716 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:18.716 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.716 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.716 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.716 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.975 [2024-11-26 20:31:12.275756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.975 20:31:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.975 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.975 "name": "Existed_Raid", 00:18:18.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.975 "strip_size_kb": 64, 00:18:18.975 "state": "configuring", 00:18:18.976 "raid_level": "raid5f", 00:18:18.976 "superblock": false, 00:18:18.976 "num_base_bdevs": 3, 00:18:18.976 "num_base_bdevs_discovered": 2, 00:18:18.976 "num_base_bdevs_operational": 3, 00:18:18.976 "base_bdevs_list": [ 00:18:18.976 { 
00:18:18.976 "name": "BaseBdev1", 00:18:18.976 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:18.976 "is_configured": true, 00:18:18.976 "data_offset": 0, 00:18:18.976 "data_size": 65536 00:18:18.976 }, 00:18:18.976 { 00:18:18.976 "name": null, 00:18:18.976 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:18.976 "is_configured": false, 00:18:18.976 "data_offset": 0, 00:18:18.976 "data_size": 65536 00:18:18.976 }, 00:18:18.976 { 00:18:18.976 "name": "BaseBdev3", 00:18:18.976 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:18.976 "is_configured": true, 00:18:18.976 "data_offset": 0, 00:18:18.976 "data_size": 65536 00:18:18.976 } 00:18:18.976 ] 00:18:18.976 }' 00:18:18.976 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.976 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.235 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.235 [2024-11-26 20:31:12.738972] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.494 "name": "Existed_Raid", 00:18:19.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.494 "strip_size_kb": 64, 00:18:19.494 "state": "configuring", 00:18:19.494 "raid_level": "raid5f", 00:18:19.494 "superblock": false, 00:18:19.494 "num_base_bdevs": 3, 00:18:19.494 "num_base_bdevs_discovered": 1, 00:18:19.494 "num_base_bdevs_operational": 3, 00:18:19.494 "base_bdevs_list": [ 00:18:19.494 { 00:18:19.494 "name": null, 00:18:19.494 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:19.494 "is_configured": false, 00:18:19.494 "data_offset": 0, 00:18:19.494 "data_size": 65536 00:18:19.494 }, 00:18:19.494 { 00:18:19.494 "name": null, 00:18:19.494 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:19.494 "is_configured": false, 00:18:19.494 "data_offset": 0, 00:18:19.494 "data_size": 65536 00:18:19.494 }, 00:18:19.494 { 00:18:19.494 "name": "BaseBdev3", 00:18:19.494 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:19.494 "is_configured": true, 00:18:19.494 "data_offset": 0, 00:18:19.494 "data_size": 65536 00:18:19.494 } 00:18:19.494 ] 00:18:19.494 }' 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.494 20:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.753 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.753 [2024-11-26 20:31:13.303081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.013 20:31:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.013 "name": "Existed_Raid", 00:18:20.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.013 "strip_size_kb": 64, 00:18:20.013 "state": "configuring", 00:18:20.013 "raid_level": "raid5f", 00:18:20.013 "superblock": false, 00:18:20.013 "num_base_bdevs": 3, 00:18:20.013 "num_base_bdevs_discovered": 2, 00:18:20.013 "num_base_bdevs_operational": 3, 00:18:20.013 "base_bdevs_list": [ 00:18:20.013 { 00:18:20.013 "name": null, 00:18:20.013 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:20.013 "is_configured": false, 00:18:20.013 "data_offset": 0, 00:18:20.013 "data_size": 65536 00:18:20.013 }, 00:18:20.013 { 00:18:20.013 "name": "BaseBdev2", 00:18:20.013 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:20.013 "is_configured": true, 00:18:20.013 "data_offset": 0, 00:18:20.013 "data_size": 65536 00:18:20.013 }, 00:18:20.013 { 00:18:20.013 "name": "BaseBdev3", 00:18:20.013 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:20.013 "is_configured": true, 00:18:20.013 "data_offset": 0, 00:18:20.013 "data_size": 65536 00:18:20.013 } 00:18:20.013 ] 00:18:20.013 }' 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.013 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.272 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.272 20:31:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:20.272 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.272 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.272 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d92d2692-c1c1-47df-b3ed-03293aa7c885 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.531 [2024-11-26 20:31:13.927854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:20.531 [2024-11-26 20:31:13.927917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:20.531 [2024-11-26 20:31:13.927928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:20.531 [2024-11-26 20:31:13.928201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:18:20.531 [2024-11-26 20:31:13.934445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:20.531 [2024-11-26 20:31:13.934471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:20.531 [2024-11-26 20:31:13.934788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.531 NewBaseBdev 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:20.531 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.531 20:31:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.531 [ 00:18:20.531 { 00:18:20.531 "name": "NewBaseBdev", 00:18:20.531 "aliases": [ 00:18:20.531 "d92d2692-c1c1-47df-b3ed-03293aa7c885" 00:18:20.531 ], 00:18:20.531 "product_name": "Malloc disk", 00:18:20.531 "block_size": 512, 00:18:20.531 "num_blocks": 65536, 00:18:20.531 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:20.531 "assigned_rate_limits": { 00:18:20.531 "rw_ios_per_sec": 0, 00:18:20.531 "rw_mbytes_per_sec": 0, 00:18:20.531 "r_mbytes_per_sec": 0, 00:18:20.531 "w_mbytes_per_sec": 0 00:18:20.531 }, 00:18:20.531 "claimed": true, 00:18:20.531 "claim_type": "exclusive_write", 00:18:20.531 "zoned": false, 00:18:20.531 "supported_io_types": { 00:18:20.531 "read": true, 00:18:20.531 "write": true, 00:18:20.531 "unmap": true, 00:18:20.531 "flush": true, 00:18:20.531 "reset": true, 00:18:20.531 "nvme_admin": false, 00:18:20.531 "nvme_io": false, 00:18:20.531 "nvme_io_md": false, 00:18:20.531 "write_zeroes": true, 00:18:20.531 "zcopy": true, 00:18:20.531 "get_zone_info": false, 00:18:20.531 "zone_management": false, 00:18:20.531 "zone_append": false, 00:18:20.531 "compare": false, 00:18:20.531 "compare_and_write": false, 00:18:20.531 "abort": true, 00:18:20.531 "seek_hole": false, 00:18:20.531 "seek_data": false, 00:18:20.531 "copy": true, 00:18:20.531 "nvme_iov_md": false 00:18:20.531 }, 00:18:20.531 "memory_domains": [ 00:18:20.531 { 00:18:20.532 "dma_device_id": "system", 00:18:20.532 "dma_device_type": 1 00:18:20.532 }, 00:18:20.532 { 00:18:20.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.532 "dma_device_type": 2 00:18:20.532 } 00:18:20.532 ], 00:18:20.532 "driver_specific": {} 00:18:20.532 } 00:18:20.532 ] 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:20.532 20:31:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.532 20:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.532 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.532 "name": "Existed_Raid", 00:18:20.532 "uuid": "6c142cc4-587b-4a60-976e-70874eb06f04", 00:18:20.532 "strip_size_kb": 64, 00:18:20.532 "state": "online", 
00:18:20.532 "raid_level": "raid5f", 00:18:20.532 "superblock": false, 00:18:20.532 "num_base_bdevs": 3, 00:18:20.532 "num_base_bdevs_discovered": 3, 00:18:20.532 "num_base_bdevs_operational": 3, 00:18:20.532 "base_bdevs_list": [ 00:18:20.532 { 00:18:20.532 "name": "NewBaseBdev", 00:18:20.532 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:20.532 "is_configured": true, 00:18:20.532 "data_offset": 0, 00:18:20.532 "data_size": 65536 00:18:20.532 }, 00:18:20.532 { 00:18:20.532 "name": "BaseBdev2", 00:18:20.532 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:20.532 "is_configured": true, 00:18:20.532 "data_offset": 0, 00:18:20.532 "data_size": 65536 00:18:20.532 }, 00:18:20.532 { 00:18:20.532 "name": "BaseBdev3", 00:18:20.532 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:20.532 "is_configured": true, 00:18:20.532 "data_offset": 0, 00:18:20.532 "data_size": 65536 00:18:20.532 } 00:18:20.532 ] 00:18:20.532 }' 00:18:20.532 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.532 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.099 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:21.099 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:21.099 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:21.100 20:31:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.100 [2024-11-26 20:31:14.465790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.100 "name": "Existed_Raid", 00:18:21.100 "aliases": [ 00:18:21.100 "6c142cc4-587b-4a60-976e-70874eb06f04" 00:18:21.100 ], 00:18:21.100 "product_name": "Raid Volume", 00:18:21.100 "block_size": 512, 00:18:21.100 "num_blocks": 131072, 00:18:21.100 "uuid": "6c142cc4-587b-4a60-976e-70874eb06f04", 00:18:21.100 "assigned_rate_limits": { 00:18:21.100 "rw_ios_per_sec": 0, 00:18:21.100 "rw_mbytes_per_sec": 0, 00:18:21.100 "r_mbytes_per_sec": 0, 00:18:21.100 "w_mbytes_per_sec": 0 00:18:21.100 }, 00:18:21.100 "claimed": false, 00:18:21.100 "zoned": false, 00:18:21.100 "supported_io_types": { 00:18:21.100 "read": true, 00:18:21.100 "write": true, 00:18:21.100 "unmap": false, 00:18:21.100 "flush": false, 00:18:21.100 "reset": true, 00:18:21.100 "nvme_admin": false, 00:18:21.100 "nvme_io": false, 00:18:21.100 "nvme_io_md": false, 00:18:21.100 "write_zeroes": true, 00:18:21.100 "zcopy": false, 00:18:21.100 "get_zone_info": false, 00:18:21.100 "zone_management": false, 00:18:21.100 "zone_append": false, 00:18:21.100 "compare": false, 00:18:21.100 "compare_and_write": false, 00:18:21.100 "abort": false, 00:18:21.100 "seek_hole": false, 00:18:21.100 "seek_data": false, 00:18:21.100 "copy": false, 00:18:21.100 "nvme_iov_md": false 00:18:21.100 }, 00:18:21.100 "driver_specific": { 00:18:21.100 "raid": { 00:18:21.100 "uuid": 
"6c142cc4-587b-4a60-976e-70874eb06f04", 00:18:21.100 "strip_size_kb": 64, 00:18:21.100 "state": "online", 00:18:21.100 "raid_level": "raid5f", 00:18:21.100 "superblock": false, 00:18:21.100 "num_base_bdevs": 3, 00:18:21.100 "num_base_bdevs_discovered": 3, 00:18:21.100 "num_base_bdevs_operational": 3, 00:18:21.100 "base_bdevs_list": [ 00:18:21.100 { 00:18:21.100 "name": "NewBaseBdev", 00:18:21.100 "uuid": "d92d2692-c1c1-47df-b3ed-03293aa7c885", 00:18:21.100 "is_configured": true, 00:18:21.100 "data_offset": 0, 00:18:21.100 "data_size": 65536 00:18:21.100 }, 00:18:21.100 { 00:18:21.100 "name": "BaseBdev2", 00:18:21.100 "uuid": "4d63a608-801f-42be-b5b9-65ee9bf4ba01", 00:18:21.100 "is_configured": true, 00:18:21.100 "data_offset": 0, 00:18:21.100 "data_size": 65536 00:18:21.100 }, 00:18:21.100 { 00:18:21.100 "name": "BaseBdev3", 00:18:21.100 "uuid": "a829a472-152d-41ce-bbbf-374abcfe5e78", 00:18:21.100 "is_configured": true, 00:18:21.100 "data_offset": 0, 00:18:21.100 "data_size": 65536 00:18:21.100 } 00:18:21.100 ] 00:18:21.100 } 00:18:21.100 } 00:18:21.100 }' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:21.100 BaseBdev2 00:18:21.100 BaseBdev3' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.100 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.359 20:31:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.359 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.359 [2024-11-26 20:31:14.733137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.359 [2024-11-26 20:31:14.733175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.359 [2024-11-26 20:31:14.733285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.359 [2024-11-26 20:31:14.733625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.360 [2024-11-26 20:31:14.733648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80319 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80319 ']' 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80319 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80319 00:18:21.360 killing process with pid 80319 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80319' 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80319 00:18:21.360 [2024-11-26 20:31:14.780938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.360 20:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80319 00:18:21.618 [2024-11-26 20:31:15.127928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:22.996 00:18:22.996 real 0m11.265s 00:18:22.996 user 0m17.795s 00:18:22.996 sys 0m1.994s 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.996 ************************************ 00:18:22.996 END TEST raid5f_state_function_test 00:18:22.996 ************************************ 00:18:22.996 20:31:16 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:18:22.996 20:31:16 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:22.996 20:31:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.996 20:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.996 ************************************ 00:18:22.996 START TEST raid5f_state_function_test_sb 00:18:22.996 ************************************ 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.996 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:22.997 20:31:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80946 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:22.997 Process raid pid: 80946 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80946' 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80946 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80946 ']' 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.997 20:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.255 [2024-11-26 20:31:16.606001] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:18:23.255 [2024-11-26 20:31:16.606135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.255 [2024-11-26 20:31:16.786332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.513 [2024-11-26 20:31:16.915472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.772 [2024-11-26 20:31:17.152803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.772 [2024-11-26 20:31:17.152859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.031 [2024-11-26 20:31:17.512294] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.031 [2024-11-26 20:31:17.512362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.031 [2024-11-26 20:31:17.512374] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.031 [2024-11-26 20:31:17.512385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.031 [2024-11-26 20:31:17.512398] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:18:24.031 [2024-11-26 20:31:17.512409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.031 20:31:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.031 "name": "Existed_Raid", 00:18:24.031 "uuid": "56efcdc8-0e36-47eb-9d67-d5808c74fe1a", 00:18:24.031 "strip_size_kb": 64, 00:18:24.031 "state": "configuring", 00:18:24.031 "raid_level": "raid5f", 00:18:24.031 "superblock": true, 00:18:24.031 "num_base_bdevs": 3, 00:18:24.031 "num_base_bdevs_discovered": 0, 00:18:24.031 "num_base_bdevs_operational": 3, 00:18:24.031 "base_bdevs_list": [ 00:18:24.031 { 00:18:24.031 "name": "BaseBdev1", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "is_configured": false, 00:18:24.031 "data_offset": 0, 00:18:24.031 "data_size": 0 00:18:24.031 }, 00:18:24.031 { 00:18:24.031 "name": "BaseBdev2", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "is_configured": false, 00:18:24.031 "data_offset": 0, 00:18:24.031 "data_size": 0 00:18:24.031 }, 00:18:24.031 { 00:18:24.031 "name": "BaseBdev3", 00:18:24.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.031 "is_configured": false, 00:18:24.031 "data_offset": 0, 00:18:24.031 "data_size": 0 00:18:24.031 } 00:18:24.031 ] 00:18:24.031 }' 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.031 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 [2024-11-26 20:31:17.947464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.598 
[2024-11-26 20:31:17.947511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 [2024-11-26 20:31:17.959447] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.598 [2024-11-26 20:31:17.959497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.598 [2024-11-26 20:31:17.959506] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.598 [2024-11-26 20:31:17.959516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.598 [2024-11-26 20:31:17.959523] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.598 [2024-11-26 20:31:17.959532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.598 20:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 [2024-11-26 20:31:18.010036] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.598 BaseBdev1 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.598 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 [ 00:18:24.598 { 00:18:24.598 "name": "BaseBdev1", 00:18:24.598 "aliases": [ 00:18:24.598 "dfb43f0f-6998-4aa3-b60c-655e0da1610a" 00:18:24.598 ], 00:18:24.598 "product_name": "Malloc disk", 00:18:24.598 "block_size": 512, 00:18:24.598 
"num_blocks": 65536, 00:18:24.598 "uuid": "dfb43f0f-6998-4aa3-b60c-655e0da1610a", 00:18:24.598 "assigned_rate_limits": { 00:18:24.598 "rw_ios_per_sec": 0, 00:18:24.598 "rw_mbytes_per_sec": 0, 00:18:24.598 "r_mbytes_per_sec": 0, 00:18:24.598 "w_mbytes_per_sec": 0 00:18:24.598 }, 00:18:24.598 "claimed": true, 00:18:24.598 "claim_type": "exclusive_write", 00:18:24.598 "zoned": false, 00:18:24.598 "supported_io_types": { 00:18:24.598 "read": true, 00:18:24.598 "write": true, 00:18:24.598 "unmap": true, 00:18:24.598 "flush": true, 00:18:24.598 "reset": true, 00:18:24.598 "nvme_admin": false, 00:18:24.598 "nvme_io": false, 00:18:24.598 "nvme_io_md": false, 00:18:24.598 "write_zeroes": true, 00:18:24.598 "zcopy": true, 00:18:24.598 "get_zone_info": false, 00:18:24.598 "zone_management": false, 00:18:24.598 "zone_append": false, 00:18:24.598 "compare": false, 00:18:24.598 "compare_and_write": false, 00:18:24.598 "abort": true, 00:18:24.599 "seek_hole": false, 00:18:24.599 "seek_data": false, 00:18:24.599 "copy": true, 00:18:24.599 "nvme_iov_md": false 00:18:24.599 }, 00:18:24.599 "memory_domains": [ 00:18:24.599 { 00:18:24.599 "dma_device_id": "system", 00:18:24.599 "dma_device_type": 1 00:18:24.599 }, 00:18:24.599 { 00:18:24.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.599 "dma_device_type": 2 00:18:24.599 } 00:18:24.599 ], 00:18:24.599 "driver_specific": {} 00:18:24.599 } 00:18:24.599 ] 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.599 "name": "Existed_Raid", 00:18:24.599 "uuid": "8a23d57e-9ec9-4651-9fdd-cf8b81268cad", 00:18:24.599 "strip_size_kb": 64, 00:18:24.599 "state": "configuring", 00:18:24.599 "raid_level": "raid5f", 00:18:24.599 "superblock": true, 00:18:24.599 "num_base_bdevs": 3, 00:18:24.599 "num_base_bdevs_discovered": 1, 00:18:24.599 "num_base_bdevs_operational": 3, 00:18:24.599 "base_bdevs_list": [ 00:18:24.599 { 00:18:24.599 
"name": "BaseBdev1", 00:18:24.599 "uuid": "dfb43f0f-6998-4aa3-b60c-655e0da1610a", 00:18:24.599 "is_configured": true, 00:18:24.599 "data_offset": 2048, 00:18:24.599 "data_size": 63488 00:18:24.599 }, 00:18:24.599 { 00:18:24.599 "name": "BaseBdev2", 00:18:24.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.599 "is_configured": false, 00:18:24.599 "data_offset": 0, 00:18:24.599 "data_size": 0 00:18:24.599 }, 00:18:24.599 { 00:18:24.599 "name": "BaseBdev3", 00:18:24.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.599 "is_configured": false, 00:18:24.599 "data_offset": 0, 00:18:24.599 "data_size": 0 00:18:24.599 } 00:18:24.599 ] 00:18:24.599 }' 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.599 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.166 [2024-11-26 20:31:18.541237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.166 [2024-11-26 20:31:18.541307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:25.166 [2024-11-26 20:31:18.553282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.166 [2024-11-26 20:31:18.555241] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.166 [2024-11-26 20:31:18.555295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.166 [2024-11-26 20:31:18.555306] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:25.166 [2024-11-26 20:31:18.555315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:25.166 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.167 "name": "Existed_Raid", 00:18:25.167 "uuid": "c29b6994-6aab-4b97-9020-e3c81d2b21ca", 00:18:25.167 "strip_size_kb": 64, 00:18:25.167 "state": "configuring", 00:18:25.167 "raid_level": "raid5f", 00:18:25.167 "superblock": true, 00:18:25.167 "num_base_bdevs": 3, 00:18:25.167 "num_base_bdevs_discovered": 1, 00:18:25.167 "num_base_bdevs_operational": 3, 00:18:25.167 "base_bdevs_list": [ 00:18:25.167 { 00:18:25.167 "name": "BaseBdev1", 00:18:25.167 "uuid": "dfb43f0f-6998-4aa3-b60c-655e0da1610a", 00:18:25.167 "is_configured": true, 00:18:25.167 "data_offset": 2048, 00:18:25.167 "data_size": 63488 00:18:25.167 }, 00:18:25.167 { 00:18:25.167 "name": "BaseBdev2", 00:18:25.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.167 "is_configured": false, 00:18:25.167 "data_offset": 0, 00:18:25.167 "data_size": 0 00:18:25.167 }, 00:18:25.167 { 00:18:25.167 "name": "BaseBdev3", 00:18:25.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.167 "is_configured": false, 00:18:25.167 "data_offset": 0, 00:18:25.167 "data_size": 
0 00:18:25.167 } 00:18:25.167 ] 00:18:25.167 }' 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.167 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.426 20:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:25.426 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.426 20:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.686 [2024-11-26 20:31:19.011617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.686 BaseBdev2 00:18:25.686 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.686 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:25.686 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:25.686 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.686 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:25.686 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.686 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.687 [ 00:18:25.687 { 00:18:25.687 "name": "BaseBdev2", 00:18:25.687 "aliases": [ 00:18:25.687 "b1e6f316-d17c-4652-a408-d8a1f577b94b" 00:18:25.687 ], 00:18:25.687 "product_name": "Malloc disk", 00:18:25.687 "block_size": 512, 00:18:25.687 "num_blocks": 65536, 00:18:25.687 "uuid": "b1e6f316-d17c-4652-a408-d8a1f577b94b", 00:18:25.687 "assigned_rate_limits": { 00:18:25.687 "rw_ios_per_sec": 0, 00:18:25.687 "rw_mbytes_per_sec": 0, 00:18:25.687 "r_mbytes_per_sec": 0, 00:18:25.687 "w_mbytes_per_sec": 0 00:18:25.687 }, 00:18:25.687 "claimed": true, 00:18:25.687 "claim_type": "exclusive_write", 00:18:25.687 "zoned": false, 00:18:25.687 "supported_io_types": { 00:18:25.687 "read": true, 00:18:25.687 "write": true, 00:18:25.687 "unmap": true, 00:18:25.687 "flush": true, 00:18:25.687 "reset": true, 00:18:25.687 "nvme_admin": false, 00:18:25.687 "nvme_io": false, 00:18:25.687 "nvme_io_md": false, 00:18:25.687 "write_zeroes": true, 00:18:25.687 "zcopy": true, 00:18:25.687 "get_zone_info": false, 00:18:25.687 "zone_management": false, 00:18:25.687 "zone_append": false, 00:18:25.687 "compare": false, 00:18:25.687 "compare_and_write": false, 00:18:25.687 "abort": true, 00:18:25.687 "seek_hole": false, 00:18:25.687 "seek_data": false, 00:18:25.687 "copy": true, 00:18:25.687 "nvme_iov_md": false 00:18:25.687 }, 00:18:25.687 "memory_domains": [ 00:18:25.687 { 00:18:25.687 "dma_device_id": "system", 00:18:25.687 "dma_device_type": 1 00:18:25.687 }, 00:18:25.687 { 00:18:25.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.687 "dma_device_type": 2 00:18:25.687 } 
00:18:25.687 ], 00:18:25.687 "driver_specific": {} 00:18:25.687 } 00:18:25.687 ] 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.687 "name": "Existed_Raid", 00:18:25.687 "uuid": "c29b6994-6aab-4b97-9020-e3c81d2b21ca", 00:18:25.687 "strip_size_kb": 64, 00:18:25.687 "state": "configuring", 00:18:25.687 "raid_level": "raid5f", 00:18:25.687 "superblock": true, 00:18:25.687 "num_base_bdevs": 3, 00:18:25.687 "num_base_bdevs_discovered": 2, 00:18:25.687 "num_base_bdevs_operational": 3, 00:18:25.687 "base_bdevs_list": [ 00:18:25.687 { 00:18:25.687 "name": "BaseBdev1", 00:18:25.687 "uuid": "dfb43f0f-6998-4aa3-b60c-655e0da1610a", 00:18:25.687 "is_configured": true, 00:18:25.687 "data_offset": 2048, 00:18:25.687 "data_size": 63488 00:18:25.687 }, 00:18:25.687 { 00:18:25.687 "name": "BaseBdev2", 00:18:25.687 "uuid": "b1e6f316-d17c-4652-a408-d8a1f577b94b", 00:18:25.687 "is_configured": true, 00:18:25.687 "data_offset": 2048, 00:18:25.687 "data_size": 63488 00:18:25.687 }, 00:18:25.687 { 00:18:25.687 "name": "BaseBdev3", 00:18:25.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.687 "is_configured": false, 00:18:25.687 "data_offset": 0, 00:18:25.687 "data_size": 0 00:18:25.687 } 00:18:25.687 ] 00:18:25.687 }' 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.687 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.254 [2024-11-26 20:31:19.573010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.254 [2024-11-26 20:31:19.573355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:26.254 [2024-11-26 20:31:19.573377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:26.254 [2024-11-26 20:31:19.573670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:26.254 BaseBdev3 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:26.254 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.255 [2024-11-26 20:31:19.580100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:26.255 [2024-11-26 20:31:19.580121] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:26.255 [2024-11-26 20:31:19.580299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.255 [ 00:18:26.255 { 00:18:26.255 "name": "BaseBdev3", 00:18:26.255 "aliases": [ 00:18:26.255 "55723c87-8b6a-4742-9e0f-ba9e4f6bfabc" 00:18:26.255 ], 00:18:26.255 "product_name": "Malloc disk", 00:18:26.255 "block_size": 512, 00:18:26.255 "num_blocks": 65536, 00:18:26.255 "uuid": "55723c87-8b6a-4742-9e0f-ba9e4f6bfabc", 00:18:26.255 "assigned_rate_limits": { 00:18:26.255 "rw_ios_per_sec": 0, 00:18:26.255 "rw_mbytes_per_sec": 0, 00:18:26.255 "r_mbytes_per_sec": 0, 00:18:26.255 "w_mbytes_per_sec": 0 00:18:26.255 }, 00:18:26.255 "claimed": true, 00:18:26.255 "claim_type": "exclusive_write", 00:18:26.255 "zoned": false, 00:18:26.255 "supported_io_types": { 00:18:26.255 "read": true, 00:18:26.255 "write": true, 00:18:26.255 "unmap": true, 00:18:26.255 "flush": true, 00:18:26.255 "reset": true, 00:18:26.255 "nvme_admin": false, 00:18:26.255 "nvme_io": false, 00:18:26.255 "nvme_io_md": false, 00:18:26.255 "write_zeroes": true, 00:18:26.255 "zcopy": true, 00:18:26.255 "get_zone_info": false, 00:18:26.255 "zone_management": false, 00:18:26.255 "zone_append": false, 00:18:26.255 "compare": false, 00:18:26.255 "compare_and_write": false, 00:18:26.255 "abort": true, 00:18:26.255 "seek_hole": false, 00:18:26.255 "seek_data": false, 00:18:26.255 "copy": true, 00:18:26.255 
"nvme_iov_md": false 00:18:26.255 }, 00:18:26.255 "memory_domains": [ 00:18:26.255 { 00:18:26.255 "dma_device_id": "system", 00:18:26.255 "dma_device_type": 1 00:18:26.255 }, 00:18:26.255 { 00:18:26.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.255 "dma_device_type": 2 00:18:26.255 } 00:18:26.255 ], 00:18:26.255 "driver_specific": {} 00:18:26.255 } 00:18:26.255 ] 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.255 "name": "Existed_Raid", 00:18:26.255 "uuid": "c29b6994-6aab-4b97-9020-e3c81d2b21ca", 00:18:26.255 "strip_size_kb": 64, 00:18:26.255 "state": "online", 00:18:26.255 "raid_level": "raid5f", 00:18:26.255 "superblock": true, 00:18:26.255 "num_base_bdevs": 3, 00:18:26.255 "num_base_bdevs_discovered": 3, 00:18:26.255 "num_base_bdevs_operational": 3, 00:18:26.255 "base_bdevs_list": [ 00:18:26.255 { 00:18:26.255 "name": "BaseBdev1", 00:18:26.255 "uuid": "dfb43f0f-6998-4aa3-b60c-655e0da1610a", 00:18:26.255 "is_configured": true, 00:18:26.255 "data_offset": 2048, 00:18:26.255 "data_size": 63488 00:18:26.255 }, 00:18:26.255 { 00:18:26.255 "name": "BaseBdev2", 00:18:26.255 "uuid": "b1e6f316-d17c-4652-a408-d8a1f577b94b", 00:18:26.255 "is_configured": true, 00:18:26.255 "data_offset": 2048, 00:18:26.255 "data_size": 63488 00:18:26.255 }, 00:18:26.255 { 00:18:26.255 "name": "BaseBdev3", 00:18:26.255 "uuid": "55723c87-8b6a-4742-9e0f-ba9e4f6bfabc", 00:18:26.255 "is_configured": true, 00:18:26.255 "data_offset": 2048, 00:18:26.255 "data_size": 63488 00:18:26.255 } 00:18:26.255 ] 00:18:26.255 }' 00:18:26.255 20:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.255 20:31:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.825 [2024-11-26 20:31:20.106196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.825 "name": "Existed_Raid", 00:18:26.825 "aliases": [ 00:18:26.825 "c29b6994-6aab-4b97-9020-e3c81d2b21ca" 00:18:26.825 ], 00:18:26.825 "product_name": "Raid Volume", 00:18:26.825 "block_size": 512, 00:18:26.825 "num_blocks": 126976, 00:18:26.825 "uuid": "c29b6994-6aab-4b97-9020-e3c81d2b21ca", 00:18:26.825 "assigned_rate_limits": { 00:18:26.825 "rw_ios_per_sec": 0, 00:18:26.825 
"rw_mbytes_per_sec": 0, 00:18:26.825 "r_mbytes_per_sec": 0, 00:18:26.825 "w_mbytes_per_sec": 0 00:18:26.825 }, 00:18:26.825 "claimed": false, 00:18:26.825 "zoned": false, 00:18:26.825 "supported_io_types": { 00:18:26.825 "read": true, 00:18:26.825 "write": true, 00:18:26.825 "unmap": false, 00:18:26.825 "flush": false, 00:18:26.825 "reset": true, 00:18:26.825 "nvme_admin": false, 00:18:26.825 "nvme_io": false, 00:18:26.825 "nvme_io_md": false, 00:18:26.825 "write_zeroes": true, 00:18:26.825 "zcopy": false, 00:18:26.825 "get_zone_info": false, 00:18:26.825 "zone_management": false, 00:18:26.825 "zone_append": false, 00:18:26.825 "compare": false, 00:18:26.825 "compare_and_write": false, 00:18:26.825 "abort": false, 00:18:26.825 "seek_hole": false, 00:18:26.825 "seek_data": false, 00:18:26.825 "copy": false, 00:18:26.825 "nvme_iov_md": false 00:18:26.825 }, 00:18:26.825 "driver_specific": { 00:18:26.825 "raid": { 00:18:26.825 "uuid": "c29b6994-6aab-4b97-9020-e3c81d2b21ca", 00:18:26.825 "strip_size_kb": 64, 00:18:26.825 "state": "online", 00:18:26.825 "raid_level": "raid5f", 00:18:26.825 "superblock": true, 00:18:26.825 "num_base_bdevs": 3, 00:18:26.825 "num_base_bdevs_discovered": 3, 00:18:26.825 "num_base_bdevs_operational": 3, 00:18:26.825 "base_bdevs_list": [ 00:18:26.825 { 00:18:26.825 "name": "BaseBdev1", 00:18:26.825 "uuid": "dfb43f0f-6998-4aa3-b60c-655e0da1610a", 00:18:26.825 "is_configured": true, 00:18:26.825 "data_offset": 2048, 00:18:26.825 "data_size": 63488 00:18:26.825 }, 00:18:26.825 { 00:18:26.825 "name": "BaseBdev2", 00:18:26.825 "uuid": "b1e6f316-d17c-4652-a408-d8a1f577b94b", 00:18:26.825 "is_configured": true, 00:18:26.825 "data_offset": 2048, 00:18:26.825 "data_size": 63488 00:18:26.825 }, 00:18:26.825 { 00:18:26.825 "name": "BaseBdev3", 00:18:26.825 "uuid": "55723c87-8b6a-4742-9e0f-ba9e4f6bfabc", 00:18:26.825 "is_configured": true, 00:18:26.825 "data_offset": 2048, 00:18:26.825 "data_size": 63488 00:18:26.825 } 00:18:26.825 ] 00:18:26.825 } 
00:18:26.825 } 00:18:26.825 }' 00:18:26.825 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:26.826 BaseBdev2 00:18:26.826 BaseBdev3' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.826 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.826 [2024-11-26 20:31:20.357622] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.085 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.086 "name": "Existed_Raid", 00:18:27.086 "uuid": "c29b6994-6aab-4b97-9020-e3c81d2b21ca", 00:18:27.086 "strip_size_kb": 64, 00:18:27.086 "state": "online", 00:18:27.086 "raid_level": "raid5f", 00:18:27.086 "superblock": true, 00:18:27.086 "num_base_bdevs": 3, 00:18:27.086 "num_base_bdevs_discovered": 2, 00:18:27.086 "num_base_bdevs_operational": 2, 00:18:27.086 "base_bdevs_list": [ 00:18:27.086 { 00:18:27.086 "name": null, 00:18:27.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.086 "is_configured": false, 00:18:27.086 "data_offset": 0, 00:18:27.086 "data_size": 63488 00:18:27.086 }, 00:18:27.086 { 00:18:27.086 "name": "BaseBdev2", 00:18:27.086 "uuid": "b1e6f316-d17c-4652-a408-d8a1f577b94b", 00:18:27.086 "is_configured": true, 00:18:27.086 "data_offset": 2048, 00:18:27.086 "data_size": 63488 00:18:27.086 }, 00:18:27.086 { 00:18:27.086 "name": "BaseBdev3", 00:18:27.086 "uuid": "55723c87-8b6a-4742-9e0f-ba9e4f6bfabc", 00:18:27.086 "is_configured": true, 00:18:27.086 "data_offset": 2048, 00:18:27.086 "data_size": 63488 00:18:27.086 } 00:18:27.086 ] 00:18:27.086 }' 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.086 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 20:31:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.346 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.346 [2024-11-26 20:31:20.892757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.346 [2024-11-26 20:31:20.892935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.606 [2024-11-26 20:31:20.994781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.606 20:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.606 20:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:27.606 20:31:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.606 [2024-11-26 20:31:21.054711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:27.606 [2024-11-26 20:31:21.054774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:27.606 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.866 BaseBdev2 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:27.866 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.867 [ 00:18:27.867 { 00:18:27.867 "name": "BaseBdev2", 00:18:27.867 "aliases": [ 00:18:27.867 "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0" 00:18:27.867 ], 00:18:27.867 "product_name": "Malloc disk", 00:18:27.867 "block_size": 512, 00:18:27.867 "num_blocks": 65536, 00:18:27.867 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:27.867 "assigned_rate_limits": { 00:18:27.867 "rw_ios_per_sec": 0, 00:18:27.867 "rw_mbytes_per_sec": 0, 00:18:27.867 "r_mbytes_per_sec": 0, 00:18:27.867 "w_mbytes_per_sec": 0 00:18:27.867 }, 00:18:27.867 "claimed": false, 00:18:27.867 "zoned": false, 00:18:27.867 "supported_io_types": { 00:18:27.867 "read": true, 00:18:27.867 "write": true, 00:18:27.867 "unmap": true, 00:18:27.867 "flush": true, 00:18:27.867 "reset": true, 00:18:27.867 "nvme_admin": false, 00:18:27.867 "nvme_io": false, 00:18:27.867 "nvme_io_md": false, 00:18:27.867 "write_zeroes": true, 00:18:27.867 "zcopy": true, 00:18:27.867 "get_zone_info": false, 00:18:27.867 "zone_management": false, 00:18:27.867 "zone_append": false, 
00:18:27.867 "compare": false, 00:18:27.867 "compare_and_write": false, 00:18:27.867 "abort": true, 00:18:27.867 "seek_hole": false, 00:18:27.867 "seek_data": false, 00:18:27.867 "copy": true, 00:18:27.867 "nvme_iov_md": false 00:18:27.867 }, 00:18:27.867 "memory_domains": [ 00:18:27.867 { 00:18:27.867 "dma_device_id": "system", 00:18:27.867 "dma_device_type": 1 00:18:27.867 }, 00:18:27.867 { 00:18:27.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.867 "dma_device_type": 2 00:18:27.867 } 00:18:27.867 ], 00:18:27.867 "driver_specific": {} 00:18:27.867 } 00:18:27.867 ] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.867 BaseBdev3 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:27.867 
20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.867 [ 00:18:27.867 { 00:18:27.867 "name": "BaseBdev3", 00:18:27.867 "aliases": [ 00:18:27.867 "c2848c26-b222-4bd3-b2ce-0b65cb723432" 00:18:27.867 ], 00:18:27.867 "product_name": "Malloc disk", 00:18:27.867 "block_size": 512, 00:18:27.867 "num_blocks": 65536, 00:18:27.867 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:27.867 "assigned_rate_limits": { 00:18:27.867 "rw_ios_per_sec": 0, 00:18:27.867 "rw_mbytes_per_sec": 0, 00:18:27.867 "r_mbytes_per_sec": 0, 00:18:27.867 "w_mbytes_per_sec": 0 00:18:27.867 }, 00:18:27.867 "claimed": false, 00:18:27.867 "zoned": false, 00:18:27.867 "supported_io_types": { 00:18:27.867 "read": true, 00:18:27.867 "write": true, 00:18:27.867 "unmap": true, 00:18:27.867 "flush": true, 00:18:27.867 "reset": true, 00:18:27.867 "nvme_admin": false, 00:18:27.867 "nvme_io": false, 00:18:27.867 "nvme_io_md": false, 00:18:27.867 "write_zeroes": true, 00:18:27.867 "zcopy": true, 00:18:27.867 "get_zone_info": 
false, 00:18:27.867 "zone_management": false, 00:18:27.867 "zone_append": false, 00:18:27.867 "compare": false, 00:18:27.867 "compare_and_write": false, 00:18:27.867 "abort": true, 00:18:27.867 "seek_hole": false, 00:18:27.867 "seek_data": false, 00:18:27.867 "copy": true, 00:18:27.867 "nvme_iov_md": false 00:18:27.867 }, 00:18:27.867 "memory_domains": [ 00:18:27.867 { 00:18:27.867 "dma_device_id": "system", 00:18:27.867 "dma_device_type": 1 00:18:27.867 }, 00:18:27.867 { 00:18:27.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.867 "dma_device_type": 2 00:18:27.867 } 00:18:27.867 ], 00:18:27.867 "driver_specific": {} 00:18:27.867 } 00:18:27.867 ] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.867 [2024-11-26 20:31:21.386926] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.867 [2024-11-26 20:31:21.387039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.867 [2024-11-26 20:31:21.387076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.867 [2024-11-26 20:31:21.389335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.867 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.127 20:31:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.127 "name": "Existed_Raid", 00:18:28.127 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:28.127 "strip_size_kb": 64, 00:18:28.127 "state": "configuring", 00:18:28.127 "raid_level": "raid5f", 00:18:28.127 "superblock": true, 00:18:28.127 "num_base_bdevs": 3, 00:18:28.127 "num_base_bdevs_discovered": 2, 00:18:28.127 "num_base_bdevs_operational": 3, 00:18:28.127 "base_bdevs_list": [ 00:18:28.127 { 00:18:28.127 "name": "BaseBdev1", 00:18:28.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.127 "is_configured": false, 00:18:28.127 "data_offset": 0, 00:18:28.127 "data_size": 0 00:18:28.127 }, 00:18:28.127 { 00:18:28.127 "name": "BaseBdev2", 00:18:28.127 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:28.127 "is_configured": true, 00:18:28.127 "data_offset": 2048, 00:18:28.127 "data_size": 63488 00:18:28.127 }, 00:18:28.127 { 00:18:28.127 "name": "BaseBdev3", 00:18:28.127 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:28.127 "is_configured": true, 00:18:28.127 "data_offset": 2048, 00:18:28.127 "data_size": 63488 00:18:28.127 } 00:18:28.127 ] 00:18:28.127 }' 00:18:28.127 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.127 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.386 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:28.386 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.386 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.386 [2024-11-26 20:31:21.894098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:28.386 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.386 
20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:28.386 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.386 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.386 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.387 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.646 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.646 "name": "Existed_Raid", 00:18:28.646 "uuid": 
"849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:28.646 "strip_size_kb": 64, 00:18:28.646 "state": "configuring", 00:18:28.646 "raid_level": "raid5f", 00:18:28.646 "superblock": true, 00:18:28.646 "num_base_bdevs": 3, 00:18:28.646 "num_base_bdevs_discovered": 1, 00:18:28.646 "num_base_bdevs_operational": 3, 00:18:28.646 "base_bdevs_list": [ 00:18:28.646 { 00:18:28.646 "name": "BaseBdev1", 00:18:28.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.646 "is_configured": false, 00:18:28.646 "data_offset": 0, 00:18:28.646 "data_size": 0 00:18:28.646 }, 00:18:28.646 { 00:18:28.646 "name": null, 00:18:28.646 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:28.646 "is_configured": false, 00:18:28.646 "data_offset": 0, 00:18:28.646 "data_size": 63488 00:18:28.646 }, 00:18:28.646 { 00:18:28.646 "name": "BaseBdev3", 00:18:28.646 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:28.646 "is_configured": true, 00:18:28.646 "data_offset": 2048, 00:18:28.646 "data_size": 63488 00:18:28.646 } 00:18:28.646 ] 00:18:28.646 }' 00:18:28.646 20:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.646 20:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.905 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.905 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.905 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.905 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:28.905 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:28.906 20:31:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.906 [2024-11-26 20:31:22.443798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.906 BaseBdev1 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:28.906 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.166 [ 00:18:29.166 { 00:18:29.166 "name": "BaseBdev1", 00:18:29.166 "aliases": [ 00:18:29.166 "bea70bf5-0132-4232-aa02-63fd667ee724" 00:18:29.166 ], 00:18:29.166 "product_name": "Malloc disk", 00:18:29.166 "block_size": 512, 00:18:29.166 "num_blocks": 65536, 00:18:29.166 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:29.166 "assigned_rate_limits": { 00:18:29.166 "rw_ios_per_sec": 0, 00:18:29.166 "rw_mbytes_per_sec": 0, 00:18:29.166 "r_mbytes_per_sec": 0, 00:18:29.166 "w_mbytes_per_sec": 0 00:18:29.166 }, 00:18:29.166 "claimed": true, 00:18:29.166 "claim_type": "exclusive_write", 00:18:29.166 "zoned": false, 00:18:29.166 "supported_io_types": { 00:18:29.166 "read": true, 00:18:29.166 "write": true, 00:18:29.166 "unmap": true, 00:18:29.166 "flush": true, 00:18:29.166 "reset": true, 00:18:29.166 "nvme_admin": false, 00:18:29.166 "nvme_io": false, 00:18:29.166 "nvme_io_md": false, 00:18:29.166 "write_zeroes": true, 00:18:29.166 "zcopy": true, 00:18:29.166 "get_zone_info": false, 00:18:29.166 "zone_management": false, 00:18:29.166 "zone_append": false, 00:18:29.166 "compare": false, 00:18:29.166 "compare_and_write": false, 00:18:29.166 "abort": true, 00:18:29.166 "seek_hole": false, 00:18:29.166 "seek_data": false, 00:18:29.166 "copy": true, 00:18:29.166 "nvme_iov_md": false 00:18:29.166 }, 00:18:29.166 "memory_domains": [ 00:18:29.166 { 00:18:29.166 "dma_device_id": "system", 00:18:29.166 "dma_device_type": 1 00:18:29.166 }, 00:18:29.166 { 00:18:29.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.166 "dma_device_type": 2 00:18:29.166 } 00:18:29.166 ], 00:18:29.166 "driver_specific": {} 00:18:29.166 } 00:18:29.166 ] 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.166 "name": "Existed_Raid", 00:18:29.166 "uuid": 
"849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:29.166 "strip_size_kb": 64, 00:18:29.166 "state": "configuring", 00:18:29.166 "raid_level": "raid5f", 00:18:29.166 "superblock": true, 00:18:29.166 "num_base_bdevs": 3, 00:18:29.166 "num_base_bdevs_discovered": 2, 00:18:29.166 "num_base_bdevs_operational": 3, 00:18:29.166 "base_bdevs_list": [ 00:18:29.166 { 00:18:29.166 "name": "BaseBdev1", 00:18:29.166 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:29.166 "is_configured": true, 00:18:29.166 "data_offset": 2048, 00:18:29.166 "data_size": 63488 00:18:29.166 }, 00:18:29.166 { 00:18:29.166 "name": null, 00:18:29.166 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:29.166 "is_configured": false, 00:18:29.166 "data_offset": 0, 00:18:29.166 "data_size": 63488 00:18:29.166 }, 00:18:29.166 { 00:18:29.166 "name": "BaseBdev3", 00:18:29.166 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:29.166 "is_configured": true, 00:18:29.166 "data_offset": 2048, 00:18:29.166 "data_size": 63488 00:18:29.166 } 00:18:29.166 ] 00:18:29.166 }' 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.166 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:29.425 20:31:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.425 [2024-11-26 20:31:22.943089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.425 20:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.684 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.684 "name": "Existed_Raid", 00:18:29.684 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:29.684 "strip_size_kb": 64, 00:18:29.684 "state": "configuring", 00:18:29.684 "raid_level": "raid5f", 00:18:29.684 "superblock": true, 00:18:29.684 "num_base_bdevs": 3, 00:18:29.684 "num_base_bdevs_discovered": 1, 00:18:29.684 "num_base_bdevs_operational": 3, 00:18:29.684 "base_bdevs_list": [ 00:18:29.684 { 00:18:29.685 "name": "BaseBdev1", 00:18:29.685 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:29.685 "is_configured": true, 00:18:29.685 "data_offset": 2048, 00:18:29.685 "data_size": 63488 00:18:29.685 }, 00:18:29.685 { 00:18:29.685 "name": null, 00:18:29.685 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:29.685 "is_configured": false, 00:18:29.685 "data_offset": 0, 00:18:29.685 "data_size": 63488 00:18:29.685 }, 00:18:29.685 { 00:18:29.685 "name": null, 00:18:29.685 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:29.685 "is_configured": false, 00:18:29.685 "data_offset": 0, 00:18:29.685 "data_size": 63488 00:18:29.685 } 00:18:29.685 ] 00:18:29.685 }' 00:18:29.685 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.685 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.981 [2024-11-26 20:31:23.502187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.981 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.240 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.240 "name": "Existed_Raid", 00:18:30.240 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:30.240 "strip_size_kb": 64, 00:18:30.240 "state": "configuring", 00:18:30.240 "raid_level": "raid5f", 00:18:30.240 "superblock": true, 00:18:30.240 "num_base_bdevs": 3, 00:18:30.240 "num_base_bdevs_discovered": 2, 00:18:30.240 "num_base_bdevs_operational": 3, 00:18:30.240 "base_bdevs_list": [ 00:18:30.240 { 00:18:30.240 "name": "BaseBdev1", 00:18:30.240 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:30.240 "is_configured": true, 00:18:30.240 "data_offset": 2048, 00:18:30.240 "data_size": 63488 00:18:30.240 }, 00:18:30.241 { 00:18:30.241 "name": null, 00:18:30.241 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:30.241 "is_configured": false, 00:18:30.241 "data_offset": 0, 00:18:30.241 "data_size": 63488 00:18:30.241 }, 00:18:30.241 { 00:18:30.241 "name": "BaseBdev3", 00:18:30.241 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 
00:18:30.241 "is_configured": true, 00:18:30.241 "data_offset": 2048, 00:18:30.241 "data_size": 63488 00:18:30.241 } 00:18:30.241 ] 00:18:30.241 }' 00:18:30.241 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.241 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.500 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.500 20:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:30.500 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.500 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.500 20:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.500 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:30.500 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:30.500 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.500 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.500 [2024-11-26 20:31:24.033344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.759 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.760 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.760 "name": "Existed_Raid", 00:18:30.760 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:30.760 "strip_size_kb": 64, 00:18:30.760 "state": "configuring", 00:18:30.760 "raid_level": "raid5f", 00:18:30.760 "superblock": true, 00:18:30.760 "num_base_bdevs": 3, 00:18:30.760 "num_base_bdevs_discovered": 1, 00:18:30.760 "num_base_bdevs_operational": 3, 00:18:30.760 "base_bdevs_list": [ 00:18:30.760 { 00:18:30.760 
"name": null, 00:18:30.760 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:30.760 "is_configured": false, 00:18:30.760 "data_offset": 0, 00:18:30.760 "data_size": 63488 00:18:30.760 }, 00:18:30.760 { 00:18:30.760 "name": null, 00:18:30.760 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:30.760 "is_configured": false, 00:18:30.760 "data_offset": 0, 00:18:30.760 "data_size": 63488 00:18:30.760 }, 00:18:30.760 { 00:18:30.760 "name": "BaseBdev3", 00:18:30.760 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:30.760 "is_configured": true, 00:18:30.760 "data_offset": 2048, 00:18:30.760 "data_size": 63488 00:18:30.760 } 00:18:30.760 ] 00:18:30.760 }' 00:18:30.760 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.760 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.328 [2024-11-26 
20:31:24.639112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.328 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.329 20:31:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.329 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.329 "name": "Existed_Raid", 00:18:31.329 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:31.329 "strip_size_kb": 64, 00:18:31.329 "state": "configuring", 00:18:31.329 "raid_level": "raid5f", 00:18:31.329 "superblock": true, 00:18:31.329 "num_base_bdevs": 3, 00:18:31.329 "num_base_bdevs_discovered": 2, 00:18:31.329 "num_base_bdevs_operational": 3, 00:18:31.329 "base_bdevs_list": [ 00:18:31.329 { 00:18:31.329 "name": null, 00:18:31.329 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:31.329 "is_configured": false, 00:18:31.329 "data_offset": 0, 00:18:31.329 "data_size": 63488 00:18:31.329 }, 00:18:31.329 { 00:18:31.329 "name": "BaseBdev2", 00:18:31.329 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:31.329 "is_configured": true, 00:18:31.329 "data_offset": 2048, 00:18:31.329 "data_size": 63488 00:18:31.329 }, 00:18:31.329 { 00:18:31.329 "name": "BaseBdev3", 00:18:31.329 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:31.329 "is_configured": true, 00:18:31.329 "data_offset": 2048, 00:18:31.329 "data_size": 63488 00:18:31.329 } 00:18:31.329 ] 00:18:31.329 }' 00:18:31.329 20:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.329 20:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.588 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.588 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:31.588 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.588 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 20:31:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bea70bf5-0132-4232-aa02-63fd667ee724 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 [2024-11-26 20:31:25.264591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:31.848 [2024-11-26 20:31:25.264942] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:31.848 [2024-11-26 20:31:25.265001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:31.848 [2024-11-26 20:31:25.265307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:31.848 NewBaseBdev 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:31.848 20:31:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 [2024-11-26 20:31:25.271780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:31.848 [2024-11-26 20:31:25.271838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:31.848 [2024-11-26 20:31:25.272062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 [ 00:18:31.848 { 00:18:31.848 "name": "NewBaseBdev", 00:18:31.848 "aliases": [ 00:18:31.848 "bea70bf5-0132-4232-aa02-63fd667ee724" 00:18:31.848 ], 00:18:31.848 "product_name": "Malloc 
disk", 00:18:31.848 "block_size": 512, 00:18:31.848 "num_blocks": 65536, 00:18:31.848 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:31.848 "assigned_rate_limits": { 00:18:31.848 "rw_ios_per_sec": 0, 00:18:31.848 "rw_mbytes_per_sec": 0, 00:18:31.848 "r_mbytes_per_sec": 0, 00:18:31.848 "w_mbytes_per_sec": 0 00:18:31.848 }, 00:18:31.848 "claimed": true, 00:18:31.848 "claim_type": "exclusive_write", 00:18:31.848 "zoned": false, 00:18:31.848 "supported_io_types": { 00:18:31.848 "read": true, 00:18:31.848 "write": true, 00:18:31.848 "unmap": true, 00:18:31.848 "flush": true, 00:18:31.848 "reset": true, 00:18:31.848 "nvme_admin": false, 00:18:31.848 "nvme_io": false, 00:18:31.848 "nvme_io_md": false, 00:18:31.848 "write_zeroes": true, 00:18:31.848 "zcopy": true, 00:18:31.848 "get_zone_info": false, 00:18:31.848 "zone_management": false, 00:18:31.848 "zone_append": false, 00:18:31.848 "compare": false, 00:18:31.848 "compare_and_write": false, 00:18:31.848 "abort": true, 00:18:31.848 "seek_hole": false, 00:18:31.848 "seek_data": false, 00:18:31.848 "copy": true, 00:18:31.848 "nvme_iov_md": false 00:18:31.848 }, 00:18:31.848 "memory_domains": [ 00:18:31.848 { 00:18:31.848 "dma_device_id": "system", 00:18:31.848 "dma_device_type": 1 00:18:31.848 }, 00:18:31.848 { 00:18:31.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.848 "dma_device_type": 2 00:18:31.848 } 00:18:31.848 ], 00:18:31.848 "driver_specific": {} 00:18:31.848 } 00:18:31.848 ] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.848 20:31:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.848 "name": "Existed_Raid", 00:18:31.848 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:31.848 "strip_size_kb": 64, 00:18:31.848 "state": "online", 00:18:31.848 "raid_level": "raid5f", 00:18:31.848 "superblock": true, 00:18:31.848 "num_base_bdevs": 3, 00:18:31.848 "num_base_bdevs_discovered": 3, 00:18:31.848 "num_base_bdevs_operational": 3, 00:18:31.848 
"base_bdevs_list": [ 00:18:31.848 { 00:18:31.848 "name": "NewBaseBdev", 00:18:31.848 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:31.848 "is_configured": true, 00:18:31.848 "data_offset": 2048, 00:18:31.848 "data_size": 63488 00:18:31.848 }, 00:18:31.848 { 00:18:31.848 "name": "BaseBdev2", 00:18:31.848 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:31.848 "is_configured": true, 00:18:31.848 "data_offset": 2048, 00:18:31.848 "data_size": 63488 00:18:31.848 }, 00:18:31.848 { 00:18:31.848 "name": "BaseBdev3", 00:18:31.848 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:31.848 "is_configured": true, 00:18:31.848 "data_offset": 2048, 00:18:31.848 "data_size": 63488 00:18:31.848 } 00:18:31.848 ] 00:18:31.848 }' 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.848 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.416 [2024-11-26 20:31:25.770947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.416 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.416 "name": "Existed_Raid", 00:18:32.416 "aliases": [ 00:18:32.416 "849eb652-6baf-495b-b90e-3c9e206b128a" 00:18:32.416 ], 00:18:32.416 "product_name": "Raid Volume", 00:18:32.416 "block_size": 512, 00:18:32.416 "num_blocks": 126976, 00:18:32.416 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:32.416 "assigned_rate_limits": { 00:18:32.416 "rw_ios_per_sec": 0, 00:18:32.416 "rw_mbytes_per_sec": 0, 00:18:32.416 "r_mbytes_per_sec": 0, 00:18:32.416 "w_mbytes_per_sec": 0 00:18:32.416 }, 00:18:32.416 "claimed": false, 00:18:32.416 "zoned": false, 00:18:32.416 "supported_io_types": { 00:18:32.416 "read": true, 00:18:32.416 "write": true, 00:18:32.416 "unmap": false, 00:18:32.416 "flush": false, 00:18:32.416 "reset": true, 00:18:32.416 "nvme_admin": false, 00:18:32.416 "nvme_io": false, 00:18:32.416 "nvme_io_md": false, 00:18:32.416 "write_zeroes": true, 00:18:32.416 "zcopy": false, 00:18:32.416 "get_zone_info": false, 00:18:32.416 "zone_management": false, 00:18:32.416 "zone_append": false, 00:18:32.416 "compare": false, 00:18:32.416 "compare_and_write": false, 00:18:32.416 "abort": false, 00:18:32.416 "seek_hole": false, 00:18:32.416 "seek_data": false, 00:18:32.416 "copy": false, 00:18:32.416 "nvme_iov_md": false 00:18:32.416 }, 00:18:32.416 "driver_specific": { 00:18:32.416 "raid": { 00:18:32.416 "uuid": "849eb652-6baf-495b-b90e-3c9e206b128a", 00:18:32.416 "strip_size_kb": 64, 00:18:32.416 "state": "online", 00:18:32.416 "raid_level": "raid5f", 00:18:32.416 "superblock": true, 00:18:32.416 
"num_base_bdevs": 3, 00:18:32.416 "num_base_bdevs_discovered": 3, 00:18:32.416 "num_base_bdevs_operational": 3, 00:18:32.416 "base_bdevs_list": [ 00:18:32.416 { 00:18:32.416 "name": "NewBaseBdev", 00:18:32.416 "uuid": "bea70bf5-0132-4232-aa02-63fd667ee724", 00:18:32.416 "is_configured": true, 00:18:32.416 "data_offset": 2048, 00:18:32.416 "data_size": 63488 00:18:32.416 }, 00:18:32.416 { 00:18:32.417 "name": "BaseBdev2", 00:18:32.417 "uuid": "33ef547e-2b8b-4fbc-93c5-2fcdc2d98bb0", 00:18:32.417 "is_configured": true, 00:18:32.417 "data_offset": 2048, 00:18:32.417 "data_size": 63488 00:18:32.417 }, 00:18:32.417 { 00:18:32.417 "name": "BaseBdev3", 00:18:32.417 "uuid": "c2848c26-b222-4bd3-b2ce-0b65cb723432", 00:18:32.417 "is_configured": true, 00:18:32.417 "data_offset": 2048, 00:18:32.417 "data_size": 63488 00:18:32.417 } 00:18:32.417 ] 00:18:32.417 } 00:18:32.417 } 00:18:32.417 }' 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:32.417 BaseBdev2 00:18:32.417 BaseBdev3' 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.417 
20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.417 20:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.677 20:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.677 20:31:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.677 [2024-11-26 20:31:26.070204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.677 [2024-11-26 20:31:26.070235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.677 [2024-11-26 20:31:26.070354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.677 [2024-11-26 20:31:26.070665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.677 [2024-11-26 20:31:26.070686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80946 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80946 ']' 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80946 00:18:32.677 20:31:26 
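The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above come from `bdev_raid.sh@189-193`: the raid bdev's `[.block_size, .md_size, .md_interleave, .dif_type]` tuple is flattened with jq's `join(" ")` into one string, and every base bdev must flatten to the same string. A sketch with mocked values (null/false fields join as empty words, which is why `cmp_raid_bdev='512 '` carries trailing spaces in the log):

```shell
# Flatten a geometry tuple into one space-joined comparison string, the
# way jq 'join(" ")' does; empty args stand in for null md_size /
# md_interleave / dif_type fields.
join_geometry() {
    # $1=block_size $2=md_size $3=md_interleave $4=dif_type
    printf '%s %s %s %s' "$1" "$2" "$3" "$4"
}
cmp_raid_bdev=$(join_geometry 512 "" "" "")   # e.g. "512   " (3 trailing spaces)
cmp_base_bdev=$(join_geometry 512 "" "" "")   # base bdev with matching geometry
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
    result=match
else
    result=mismatch
fi
echo "$result"
```

Comparing the joined strings in one `[[ ]]` catches a mismatch in any of the four fields at once, rather than testing each field separately per base bdev.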
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80946 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80946' 00:18:32.677 killing process with pid 80946 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80946 00:18:32.677 [2024-11-26 20:31:26.118739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.677 20:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80946 00:18:32.936 [2024-11-26 20:31:26.440377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.316 20:31:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:34.316 00:18:34.316 real 0m11.111s 00:18:34.316 user 0m17.661s 00:18:34.316 sys 0m1.976s 00:18:34.316 20:31:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.316 20:31:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.316 ************************************ 00:18:34.316 END TEST raid5f_state_function_test_sb 00:18:34.316 ************************************ 00:18:34.316 20:31:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:18:34.316 20:31:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:34.316 
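The `killprocess 80946` sequence above (`autotest_common.sh@954-978`) checks the pid with `kill -0`, verifies the process name, signals it, then `wait`s so the exit status is reaped. A runnable sketch of that shutdown pattern, with a background `sleep` standing in for the SPDK app under test:

```shell
# Stand-in for the long-running SPDK app process.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then   # pid exists and is signalable
    kill "$pid"                        # send SIGTERM, as killprocess does
fi
wait "$pid" 2>/dev/null                # reap the child; non-zero status expected
status=$?
# After wait, the pid must be gone.
kill -0 "$pid" 2>/dev/null && alive=yes || alive=no
echo "alive=$alive status=$status"
```

Waiting on the pid after killing it is what lets the test assert the process actually exited before tearing down the rest of the environment.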
20:31:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.316 20:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.316 ************************************ 00:18:34.316 START TEST raid5f_superblock_test 00:18:34.316 ************************************ 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81572 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81572 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81572 ']' 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.316 20:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.316 [2024-11-26 20:31:27.772069] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
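The `waitforlisten 81572` call above (`autotest_common.sh@839-844`) polls for the app's RPC socket at `/var/tmp/spdk.sock` with a bounded `max_retries=100` rather than sleeping a fixed interval. A hedged sketch of that bounded-poll loop, using a temp file created after a short delay to simulate the app opening its socket (the path here is hypothetical, generated just for the demo):

```shell
rpc_addr=$(mktemp -u)                 # hypothetical socket path for the demo
( sleep 0.2; touch "$rpc_addr" ) &    # simulate the app starting to listen
max_retries=100
i=0
until [ -e "$rpc_addr" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
        echo "timed out waiting for $rpc_addr" >&2
        break
    fi
    sleep 0.1                          # back off briefly between probes
done
[ -e "$rpc_addr" ] && listening=yes || listening=no
echo "listening=$listening"
rm -f "$rpc_addr"
```

The retry cap is what turns a hung app start into a fast, diagnosable test failure instead of an indefinite stall.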
00:18:34.316 [2024-11-26 20:31:27.772301] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81572 ] 00:18:34.573 [2024-11-26 20:31:27.949803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.573 [2024-11-26 20:31:28.073617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.831 [2024-11-26 20:31:28.279534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.831 [2024-11-26 20:31:28.279678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.432 malloc1 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.432 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 [2024-11-26 20:31:28.766872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.433 [2024-11-26 20:31:28.767015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.433 [2024-11-26 20:31:28.767048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:35.433 [2024-11-26 20:31:28.767060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.433 [2024-11-26 20:31:28.769635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.433 [2024-11-26 20:31:28.769677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.433 pt1 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 malloc2 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 [2024-11-26 20:31:28.824812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.433 [2024-11-26 20:31:28.824961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.433 [2024-11-26 20:31:28.825052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:35.433 [2024-11-26 20:31:28.825109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.433 [2024-11-26 20:31:28.827670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.433 [2024-11-26 20:31:28.827762] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.433 pt2 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 malloc3 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 [2024-11-26 20:31:28.898274] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:35.433 [2024-11-26 20:31:28.898428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.433 [2024-11-26 20:31:28.898499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:35.433 [2024-11-26 20:31:28.898566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.433 [2024-11-26 20:31:28.901363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.433 [2024-11-26 20:31:28.901462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:35.433 pt3 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 [2024-11-26 20:31:28.910442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:35.433 [2024-11-26 20:31:28.912593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.433 [2024-11-26 20:31:28.912685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:35.433 [2024-11-26 20:31:28.912914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:35.433 [2024-11-26 20:31:28.912940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
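The `(( i <= num_base_bdevs ))` loop traced above (`bdev_raid.sh@416-426`) builds three parallel arrays, `malloc$i`, `pt$i`, and a fixed-pattern UUID per device, then creates a malloc bdev and wraps it in a passthru bdev for each. A sketch of just the array-building half, which runs stand-alone (the `rpc_cmd bdev_malloc_create` / `bdev_passthru_create` calls that create the actual bdevs are omitted):

```shell
num_base_bdevs=3
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    # Matches the 00000000-0000-0000-0000-00000000000N UUIDs in the log.
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
done
echo "${base_bdevs_pt[*]}"        # pt1 pt2 pt3
echo "${base_bdevs_pt_uuid[2]}"   # uuid of the third device
```

Keeping the three arrays index-aligned is what lets later cleanup loops like `for i in "${base_bdevs_pt[@]}"` tear down each passthru/malloc pair by name.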
00:18:35.433 [2024-11-26 20:31:28.913261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:35.433 [2024-11-26 20:31:28.920170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:35.433 [2024-11-26 20:31:28.920274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:35.433 [2024-11-26 20:31:28.920612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 
20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.433 "name": "raid_bdev1", 00:18:35.433 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:35.433 "strip_size_kb": 64, 00:18:35.433 "state": "online", 00:18:35.433 "raid_level": "raid5f", 00:18:35.433 "superblock": true, 00:18:35.433 "num_base_bdevs": 3, 00:18:35.433 "num_base_bdevs_discovered": 3, 00:18:35.433 "num_base_bdevs_operational": 3, 00:18:35.433 "base_bdevs_list": [ 00:18:35.433 { 00:18:35.433 "name": "pt1", 00:18:35.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.433 "is_configured": true, 00:18:35.433 "data_offset": 2048, 00:18:35.433 "data_size": 63488 00:18:35.433 }, 00:18:35.433 { 00:18:35.433 "name": "pt2", 00:18:35.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.433 "is_configured": true, 00:18:35.433 "data_offset": 2048, 00:18:35.433 "data_size": 63488 00:18:35.433 }, 00:18:35.433 { 00:18:35.433 "name": "pt3", 00:18:35.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:35.433 "is_configured": true, 00:18:35.433 "data_offset": 2048, 00:18:35.433 "data_size": 63488 00:18:35.433 } 00:18:35.433 ] 00:18:35.433 }' 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.433 20:31:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.015 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:36.016 20:31:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.016 [2024-11-26 20:31:29.399940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.016 "name": "raid_bdev1", 00:18:36.016 "aliases": [ 00:18:36.016 "c15de6bd-70c7-420b-ab3a-b44cf83a1cef" 00:18:36.016 ], 00:18:36.016 "product_name": "Raid Volume", 00:18:36.016 "block_size": 512, 00:18:36.016 "num_blocks": 126976, 00:18:36.016 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:36.016 "assigned_rate_limits": { 00:18:36.016 "rw_ios_per_sec": 0, 00:18:36.016 "rw_mbytes_per_sec": 0, 00:18:36.016 "r_mbytes_per_sec": 0, 00:18:36.016 "w_mbytes_per_sec": 0 00:18:36.016 }, 00:18:36.016 "claimed": false, 00:18:36.016 "zoned": false, 00:18:36.016 "supported_io_types": { 00:18:36.016 "read": true, 00:18:36.016 "write": true, 00:18:36.016 "unmap": false, 00:18:36.016 "flush": false, 00:18:36.016 "reset": true, 00:18:36.016 "nvme_admin": false, 00:18:36.016 "nvme_io": false, 00:18:36.016 "nvme_io_md": false, 
00:18:36.016 "write_zeroes": true, 00:18:36.016 "zcopy": false, 00:18:36.016 "get_zone_info": false, 00:18:36.016 "zone_management": false, 00:18:36.016 "zone_append": false, 00:18:36.016 "compare": false, 00:18:36.016 "compare_and_write": false, 00:18:36.016 "abort": false, 00:18:36.016 "seek_hole": false, 00:18:36.016 "seek_data": false, 00:18:36.016 "copy": false, 00:18:36.016 "nvme_iov_md": false 00:18:36.016 }, 00:18:36.016 "driver_specific": { 00:18:36.016 "raid": { 00:18:36.016 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:36.016 "strip_size_kb": 64, 00:18:36.016 "state": "online", 00:18:36.016 "raid_level": "raid5f", 00:18:36.016 "superblock": true, 00:18:36.016 "num_base_bdevs": 3, 00:18:36.016 "num_base_bdevs_discovered": 3, 00:18:36.016 "num_base_bdevs_operational": 3, 00:18:36.016 "base_bdevs_list": [ 00:18:36.016 { 00:18:36.016 "name": "pt1", 00:18:36.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.016 "is_configured": true, 00:18:36.016 "data_offset": 2048, 00:18:36.016 "data_size": 63488 00:18:36.016 }, 00:18:36.016 { 00:18:36.016 "name": "pt2", 00:18:36.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.016 "is_configured": true, 00:18:36.016 "data_offset": 2048, 00:18:36.016 "data_size": 63488 00:18:36.016 }, 00:18:36.016 { 00:18:36.016 "name": "pt3", 00:18:36.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.016 "is_configured": true, 00:18:36.016 "data_offset": 2048, 00:18:36.016 "data_size": 63488 00:18:36.016 } 00:18:36.016 ] 00:18:36.016 } 00:18:36.016 } 00:18:36.016 }' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:36.016 pt2 00:18:36.016 pt3' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.016 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.276 
20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.276 [2024-11-26 20:31:29.655517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c15de6bd-70c7-420b-ab3a-b44cf83a1cef 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c15de6bd-70c7-420b-ab3a-b44cf83a1cef ']' 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.276 20:31:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.276 [2024-11-26 20:31:29.683260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.276 [2024-11-26 20:31:29.683302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.276 [2024-11-26 20:31:29.683416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.276 [2024-11-26 20:31:29.683541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.276 [2024-11-26 20:31:29.683569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.276 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.277 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.277 [2024-11-26 20:31:29.827085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:36.536 [2024-11-26 20:31:29.829311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:36.536 [2024-11-26 20:31:29.829382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:36.536 [2024-11-26 20:31:29.829444] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:36.536 [2024-11-26 20:31:29.829505] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:36.536 [2024-11-26 20:31:29.829528] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:36.536 [2024-11-26 20:31:29.829548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.536 [2024-11-26 20:31:29.829560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:36.536 request: 00:18:36.536 { 00:18:36.536 "name": "raid_bdev1", 00:18:36.536 "raid_level": "raid5f", 00:18:36.536 "base_bdevs": [ 00:18:36.536 "malloc1", 00:18:36.536 "malloc2", 00:18:36.536 "malloc3" 00:18:36.536 ], 00:18:36.536 "strip_size_kb": 64, 00:18:36.536 "superblock": false, 00:18:36.536 "method": "bdev_raid_create", 00:18:36.536 "req_id": 1 00:18:36.536 } 00:18:36.536 Got JSON-RPC error response 00:18:36.536 response: 00:18:36.536 { 00:18:36.536 "code": -17, 00:18:36.536 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:36.536 } 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.536 
20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.536 [2024-11-26 20:31:29.894891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:36.536 [2024-11-26 20:31:29.894965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.536 [2024-11-26 20:31:29.894987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:36.536 [2024-11-26 20:31:29.894998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.536 [2024-11-26 20:31:29.897458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.536 [2024-11-26 20:31:29.897497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:36.536 [2024-11-26 20:31:29.897596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:36.536 [2024-11-26 20:31:29.897673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:36.536 pt1 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:36.536 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.537 "name": "raid_bdev1", 00:18:36.537 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:36.537 "strip_size_kb": 64, 00:18:36.537 "state": "configuring", 00:18:36.537 "raid_level": "raid5f", 00:18:36.537 "superblock": true, 00:18:36.537 "num_base_bdevs": 3, 00:18:36.537 "num_base_bdevs_discovered": 1, 00:18:36.537 
"num_base_bdevs_operational": 3, 00:18:36.537 "base_bdevs_list": [ 00:18:36.537 { 00:18:36.537 "name": "pt1", 00:18:36.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.537 "is_configured": true, 00:18:36.537 "data_offset": 2048, 00:18:36.537 "data_size": 63488 00:18:36.537 }, 00:18:36.537 { 00:18:36.537 "name": null, 00:18:36.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.537 "is_configured": false, 00:18:36.537 "data_offset": 2048, 00:18:36.537 "data_size": 63488 00:18:36.537 }, 00:18:36.537 { 00:18:36.537 "name": null, 00:18:36.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.537 "is_configured": false, 00:18:36.537 "data_offset": 2048, 00:18:36.537 "data_size": 63488 00:18:36.537 } 00:18:36.537 ] 00:18:36.537 }' 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.537 20:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.105 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:37.105 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.105 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.105 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.105 [2024-11-26 20:31:30.374120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.105 [2024-11-26 20:31:30.374198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.106 [2024-11-26 20:31:30.374226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:37.106 [2024-11-26 20:31:30.374237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.106 [2024-11-26 20:31:30.374740] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.106 [2024-11-26 20:31:30.374780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.106 [2024-11-26 20:31:30.374882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:37.106 [2024-11-26 20:31:30.374918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.106 pt2 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.106 [2024-11-26 20:31:30.386102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.106 "name": "raid_bdev1", 00:18:37.106 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:37.106 "strip_size_kb": 64, 00:18:37.106 "state": "configuring", 00:18:37.106 "raid_level": "raid5f", 00:18:37.106 "superblock": true, 00:18:37.106 "num_base_bdevs": 3, 00:18:37.106 "num_base_bdevs_discovered": 1, 00:18:37.106 "num_base_bdevs_operational": 3, 00:18:37.106 "base_bdevs_list": [ 00:18:37.106 { 00:18:37.106 "name": "pt1", 00:18:37.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.106 "is_configured": true, 00:18:37.106 "data_offset": 2048, 00:18:37.106 "data_size": 63488 00:18:37.106 }, 00:18:37.106 { 00:18:37.106 "name": null, 00:18:37.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.106 "is_configured": false, 00:18:37.106 "data_offset": 0, 00:18:37.106 "data_size": 63488 00:18:37.106 }, 00:18:37.106 { 00:18:37.106 "name": null, 00:18:37.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:37.106 "is_configured": false, 00:18:37.106 "data_offset": 2048, 00:18:37.106 "data_size": 63488 00:18:37.106 } 00:18:37.106 ] 00:18:37.106 }' 00:18:37.106 20:31:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.106 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.366 [2024-11-26 20:31:30.893261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.366 [2024-11-26 20:31:30.893341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.366 [2024-11-26 20:31:30.893362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:37.366 [2024-11-26 20:31:30.893375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.366 [2024-11-26 20:31:30.893930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.366 [2024-11-26 20:31:30.893965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.366 [2024-11-26 20:31:30.894058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:37.366 [2024-11-26 20:31:30.894090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.366 pt2 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:37.366 20:31:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.366 [2024-11-26 20:31:30.905259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:37.366 [2024-11-26 20:31:30.905351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.366 [2024-11-26 20:31:30.905370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:37.366 [2024-11-26 20:31:30.905382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.366 [2024-11-26 20:31:30.905869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.366 [2024-11-26 20:31:30.905903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:37.366 [2024-11-26 20:31:30.905992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:37.366 [2024-11-26 20:31:30.906020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.366 [2024-11-26 20:31:30.906171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:37.366 [2024-11-26 20:31:30.906195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:37.366 [2024-11-26 20:31:30.906478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:37.366 [2024-11-26 20:31:30.912410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:37.366 [2024-11-26 20:31:30.912439] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:37.366 [2024-11-26 20:31:30.912660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.366 pt3 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.366 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.625 "name": "raid_bdev1", 00:18:37.625 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:37.625 "strip_size_kb": 64, 00:18:37.625 "state": "online", 00:18:37.625 "raid_level": "raid5f", 00:18:37.625 "superblock": true, 00:18:37.625 "num_base_bdevs": 3, 00:18:37.625 "num_base_bdevs_discovered": 3, 00:18:37.625 "num_base_bdevs_operational": 3, 00:18:37.625 "base_bdevs_list": [ 00:18:37.625 { 00:18:37.625 "name": "pt1", 00:18:37.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.625 "is_configured": true, 00:18:37.625 "data_offset": 2048, 00:18:37.625 "data_size": 63488 00:18:37.625 }, 00:18:37.625 { 00:18:37.625 "name": "pt2", 00:18:37.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.625 "is_configured": true, 00:18:37.625 "data_offset": 2048, 00:18:37.625 "data_size": 63488 00:18:37.625 }, 00:18:37.625 { 00:18:37.625 "name": "pt3", 00:18:37.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:37.625 "is_configured": true, 00:18:37.625 "data_offset": 2048, 00:18:37.625 "data_size": 63488 00:18:37.625 } 00:18:37.625 ] 00:18:37.625 }' 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.625 20:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:37.883 
20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.883 [2024-11-26 20:31:31.383741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.883 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:37.883 "name": "raid_bdev1", 00:18:37.883 "aliases": [ 00:18:37.883 "c15de6bd-70c7-420b-ab3a-b44cf83a1cef" 00:18:37.883 ], 00:18:37.883 "product_name": "Raid Volume", 00:18:37.883 "block_size": 512, 00:18:37.883 "num_blocks": 126976, 00:18:37.883 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:37.883 "assigned_rate_limits": { 00:18:37.883 "rw_ios_per_sec": 0, 00:18:37.883 "rw_mbytes_per_sec": 0, 00:18:37.883 "r_mbytes_per_sec": 0, 00:18:37.883 "w_mbytes_per_sec": 0 00:18:37.883 }, 00:18:37.883 "claimed": false, 00:18:37.883 "zoned": false, 00:18:37.883 "supported_io_types": { 00:18:37.883 "read": true, 00:18:37.883 "write": true, 00:18:37.883 "unmap": false, 00:18:37.883 "flush": false, 00:18:37.883 "reset": true, 00:18:37.883 "nvme_admin": false, 00:18:37.883 "nvme_io": false, 00:18:37.883 "nvme_io_md": false, 00:18:37.883 "write_zeroes": true, 00:18:37.883 "zcopy": false, 00:18:37.883 "get_zone_info": false, 
00:18:37.884 "zone_management": false, 00:18:37.884 "zone_append": false, 00:18:37.884 "compare": false, 00:18:37.884 "compare_and_write": false, 00:18:37.884 "abort": false, 00:18:37.884 "seek_hole": false, 00:18:37.884 "seek_data": false, 00:18:37.884 "copy": false, 00:18:37.884 "nvme_iov_md": false 00:18:37.884 }, 00:18:37.884 "driver_specific": { 00:18:37.884 "raid": { 00:18:37.884 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:37.884 "strip_size_kb": 64, 00:18:37.884 "state": "online", 00:18:37.884 "raid_level": "raid5f", 00:18:37.884 "superblock": true, 00:18:37.884 "num_base_bdevs": 3, 00:18:37.884 "num_base_bdevs_discovered": 3, 00:18:37.884 "num_base_bdevs_operational": 3, 00:18:37.884 "base_bdevs_list": [ 00:18:37.884 { 00:18:37.884 "name": "pt1", 00:18:37.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.884 "is_configured": true, 00:18:37.884 "data_offset": 2048, 00:18:37.884 "data_size": 63488 00:18:37.884 }, 00:18:37.884 { 00:18:37.884 "name": "pt2", 00:18:37.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.884 "is_configured": true, 00:18:37.884 "data_offset": 2048, 00:18:37.884 "data_size": 63488 00:18:37.884 }, 00:18:37.884 { 00:18:37.884 "name": "pt3", 00:18:37.884 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:37.884 "is_configured": true, 00:18:37.884 "data_offset": 2048, 00:18:37.884 "data_size": 63488 00:18:37.884 } 00:18:37.884 ] 00:18:37.884 } 00:18:37.884 } 00:18:37.884 }' 00:18:37.884 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:38.143 pt2 00:18:38.143 pt3' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.143 [2024-11-26 20:31:31.675248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.143 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.402 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c15de6bd-70c7-420b-ab3a-b44cf83a1cef '!=' c15de6bd-70c7-420b-ab3a-b44cf83a1cef ']' 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:38.403 20:31:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.403 [2024-11-26 20:31:31.722989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.403 20:31:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.403 "name": "raid_bdev1", 00:18:38.403 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:38.403 "strip_size_kb": 64, 00:18:38.403 "state": "online", 00:18:38.403 "raid_level": "raid5f", 00:18:38.403 "superblock": true, 00:18:38.403 "num_base_bdevs": 3, 00:18:38.403 "num_base_bdevs_discovered": 2, 00:18:38.403 "num_base_bdevs_operational": 2, 00:18:38.403 "base_bdevs_list": [ 00:18:38.403 { 00:18:38.403 "name": null, 00:18:38.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.403 "is_configured": false, 00:18:38.403 "data_offset": 0, 00:18:38.403 "data_size": 63488 00:18:38.403 }, 00:18:38.403 { 00:18:38.403 "name": "pt2", 00:18:38.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.403 "is_configured": true, 00:18:38.403 "data_offset": 2048, 00:18:38.403 "data_size": 63488 00:18:38.403 }, 00:18:38.403 { 00:18:38.403 "name": "pt3", 00:18:38.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.403 "is_configured": true, 00:18:38.403 "data_offset": 2048, 00:18:38.403 "data_size": 63488 00:18:38.403 } 00:18:38.403 ] 00:18:38.403 }' 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.403 20:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.661 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.661 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.661 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.661 [2024-11-26 20:31:32.194137] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.661 [2024-11-26 20:31:32.194176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.661 [2024-11-26 20:31:32.194282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.661 [2024-11-26 20:31:32.194352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.661 [2024-11-26 20:31:32.194368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:38.661 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.661 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.661 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:38.662 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.662 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.662 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.921 [2024-11-26 20:31:32.281950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.921 [2024-11-26 20:31:32.282019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.921 [2024-11-26 20:31:32.282038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:38.921 [2024-11-26 20:31:32.282076] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:18:38.921 [2024-11-26 20:31:32.284446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.921 [2024-11-26 20:31:32.284487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.921 [2024-11-26 20:31:32.284578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:38.921 [2024-11-26 20:31:32.284639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.921 pt2 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.921 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.921 "name": "raid_bdev1", 00:18:38.921 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:38.921 "strip_size_kb": 64, 00:18:38.921 "state": "configuring", 00:18:38.921 "raid_level": "raid5f", 00:18:38.921 "superblock": true, 00:18:38.921 "num_base_bdevs": 3, 00:18:38.921 "num_base_bdevs_discovered": 1, 00:18:38.921 "num_base_bdevs_operational": 2, 00:18:38.921 "base_bdevs_list": [ 00:18:38.921 { 00:18:38.921 "name": null, 00:18:38.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.921 "is_configured": false, 00:18:38.921 "data_offset": 2048, 00:18:38.921 "data_size": 63488 00:18:38.921 }, 00:18:38.921 { 00:18:38.921 "name": "pt2", 00:18:38.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.921 "is_configured": true, 00:18:38.921 "data_offset": 2048, 00:18:38.921 "data_size": 63488 00:18:38.921 }, 00:18:38.921 { 00:18:38.922 "name": null, 00:18:38.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.922 "is_configured": false, 00:18:38.922 "data_offset": 2048, 00:18:38.922 "data_size": 63488 00:18:38.922 } 00:18:38.922 ] 00:18:38.922 }' 00:18:38.922 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.922 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.490 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:39.490 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:39.490 20:31:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:18:39.490 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:39.490 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.490 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.490 [2024-11-26 20:31:32.765171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:39.490 [2024-11-26 20:31:32.765289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.490 [2024-11-26 20:31:32.765315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:39.490 [2024-11-26 20:31:32.765329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.490 [2024-11-26 20:31:32.765864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.490 [2024-11-26 20:31:32.765897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:39.490 [2024-11-26 20:31:32.765998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:39.491 [2024-11-26 20:31:32.766034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:39.491 [2024-11-26 20:31:32.766174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:39.491 [2024-11-26 20:31:32.766192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:39.491 [2024-11-26 20:31:32.766498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:39.491 [2024-11-26 20:31:32.772823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:39.491 [2024-11-26 20:31:32.772852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:18:39.491 [2024-11-26 20:31:32.773305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.491 pt3 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.491 20:31:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.491 "name": "raid_bdev1", 00:18:39.491 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:39.491 "strip_size_kb": 64, 00:18:39.491 "state": "online", 00:18:39.491 "raid_level": "raid5f", 00:18:39.491 "superblock": true, 00:18:39.491 "num_base_bdevs": 3, 00:18:39.491 "num_base_bdevs_discovered": 2, 00:18:39.491 "num_base_bdevs_operational": 2, 00:18:39.491 "base_bdevs_list": [ 00:18:39.491 { 00:18:39.491 "name": null, 00:18:39.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.491 "is_configured": false, 00:18:39.491 "data_offset": 2048, 00:18:39.491 "data_size": 63488 00:18:39.491 }, 00:18:39.491 { 00:18:39.491 "name": "pt2", 00:18:39.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.491 "is_configured": true, 00:18:39.491 "data_offset": 2048, 00:18:39.491 "data_size": 63488 00:18:39.491 }, 00:18:39.491 { 00:18:39.491 "name": "pt3", 00:18:39.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.491 "is_configured": true, 00:18:39.491 "data_offset": 2048, 00:18:39.491 "data_size": 63488 00:18:39.491 } 00:18:39.491 ] 00:18:39.491 }' 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.491 20:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.751 [2024-11-26 20:31:33.253390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.751 [2024-11-26 20:31:33.253432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.751 [2024-11-26 20:31:33.253524] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.751 [2024-11-26 20:31:33.253601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.751 [2024-11-26 20:31:33.253618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:39.751 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.011 [2024-11-26 20:31:33.329377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.011 [2024-11-26 20:31:33.329446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.011 [2024-11-26 20:31:33.329470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:40.011 [2024-11-26 20:31:33.329482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.011 [2024-11-26 20:31:33.332347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.011 [2024-11-26 20:31:33.332382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.011 [2024-11-26 20:31:33.332492] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:40.011 [2024-11-26 20:31:33.332557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.011 [2024-11-26 20:31:33.332772] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:40.011 [2024-11-26 20:31:33.332798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.011 [2024-11-26 20:31:33.332820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:40.011 [2024-11-26 20:31:33.332897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.011 pt1 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:40.011 20:31:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.011 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.012 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.012 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.012 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.012 "name": "raid_bdev1", 00:18:40.012 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:40.012 "strip_size_kb": 64, 00:18:40.012 "state": "configuring", 00:18:40.012 "raid_level": "raid5f", 00:18:40.012 
"superblock": true, 00:18:40.012 "num_base_bdevs": 3, 00:18:40.012 "num_base_bdevs_discovered": 1, 00:18:40.012 "num_base_bdevs_operational": 2, 00:18:40.012 "base_bdevs_list": [ 00:18:40.012 { 00:18:40.012 "name": null, 00:18:40.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.012 "is_configured": false, 00:18:40.012 "data_offset": 2048, 00:18:40.012 "data_size": 63488 00:18:40.012 }, 00:18:40.012 { 00:18:40.012 "name": "pt2", 00:18:40.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.012 "is_configured": true, 00:18:40.012 "data_offset": 2048, 00:18:40.012 "data_size": 63488 00:18:40.012 }, 00:18:40.012 { 00:18:40.012 "name": null, 00:18:40.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.012 "is_configured": false, 00:18:40.012 "data_offset": 2048, 00:18:40.012 "data_size": 63488 00:18:40.012 } 00:18:40.012 ] 00:18:40.012 }' 00:18:40.012 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.012 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.271 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:40.271 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.271 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:40.271 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.271 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.530 [2024-11-26 20:31:33.860771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:40.530 [2024-11-26 20:31:33.860860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.530 [2024-11-26 20:31:33.860884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:40.530 [2024-11-26 20:31:33.860896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.530 [2024-11-26 20:31:33.861519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.530 [2024-11-26 20:31:33.861552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:40.530 [2024-11-26 20:31:33.861657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:40.530 [2024-11-26 20:31:33.861690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:40.530 [2024-11-26 20:31:33.861861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:40.530 [2024-11-26 20:31:33.861882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:40.530 [2024-11-26 20:31:33.862193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:40.530 [2024-11-26 20:31:33.869288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:40.530 [2024-11-26 20:31:33.869324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:40.530 [2024-11-26 20:31:33.869645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.530 pt3 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.530 "name": "raid_bdev1", 00:18:40.530 "uuid": "c15de6bd-70c7-420b-ab3a-b44cf83a1cef", 00:18:40.530 "strip_size_kb": 64, 00:18:40.530 "state": "online", 00:18:40.530 "raid_level": 
"raid5f", 00:18:40.530 "superblock": true, 00:18:40.530 "num_base_bdevs": 3, 00:18:40.530 "num_base_bdevs_discovered": 2, 00:18:40.530 "num_base_bdevs_operational": 2, 00:18:40.530 "base_bdevs_list": [ 00:18:40.530 { 00:18:40.530 "name": null, 00:18:40.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.530 "is_configured": false, 00:18:40.530 "data_offset": 2048, 00:18:40.530 "data_size": 63488 00:18:40.530 }, 00:18:40.530 { 00:18:40.530 "name": "pt2", 00:18:40.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.530 "is_configured": true, 00:18:40.530 "data_offset": 2048, 00:18:40.530 "data_size": 63488 00:18:40.530 }, 00:18:40.530 { 00:18:40.530 "name": "pt3", 00:18:40.530 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.530 "is_configured": true, 00:18:40.530 "data_offset": 2048, 00:18:40.530 "data_size": 63488 00:18:40.530 } 00:18:40.530 ] 00:18:40.530 }' 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.530 20:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.789 20:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:40.789 20:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:40.789 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.789 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.789 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.050 [2024-11-26 20:31:34.373543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c15de6bd-70c7-420b-ab3a-b44cf83a1cef '!=' c15de6bd-70c7-420b-ab3a-b44cf83a1cef ']' 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81572 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81572 ']' 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81572 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81572 00:18:41.050 killing process with pid 81572 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81572' 00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81572 00:18:41.050 [2024-11-26 20:31:34.438838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.050 [2024-11-26 20:31:34.438940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:18:41.050 20:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81572 00:18:41.050 [2024-11-26 20:31:34.439015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.050 [2024-11-26 20:31:34.439030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:41.309 [2024-11-26 20:31:34.786969] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.696 20:31:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:42.696 00:18:42.696 real 0m8.400s 00:18:42.696 user 0m13.100s 00:18:42.696 sys 0m1.473s 00:18:42.696 20:31:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.696 20:31:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.696 ************************************ 00:18:42.696 END TEST raid5f_superblock_test 00:18:42.696 ************************************ 00:18:42.696 20:31:36 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:42.696 20:31:36 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:42.696 20:31:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:42.696 20:31:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.696 20:31:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.696 ************************************ 00:18:42.696 START TEST raid5f_rebuild_test 00:18:42.696 ************************************ 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:42.696 20:31:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82024 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82024 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82024 ']' 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.696 20:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.956 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:42.956 Zero copy mechanism will not be used. 00:18:42.956 [2024-11-26 20:31:36.252939] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:18:42.956 [2024-11-26 20:31:36.253074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82024 ] 00:18:42.956 [2024-11-26 20:31:36.433360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.217 [2024-11-26 20:31:36.566339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.476 [2024-11-26 20:31:36.796124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.476 [2024-11-26 20:31:36.796206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.737 BaseBdev1_malloc 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.737 20:31:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.737 [2024-11-26 20:31:37.218992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:43.737 [2024-11-26 20:31:37.219092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.737 [2024-11-26 20:31:37.219122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:43.737 [2024-11-26 20:31:37.219138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.737 [2024-11-26 20:31:37.221860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.737 [2024-11-26 20:31:37.221927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:43.737 BaseBdev1 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.737 BaseBdev2_malloc 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.737 [2024-11-26 20:31:37.278514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:18:43.737 [2024-11-26 20:31:37.278608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.737 [2024-11-26 20:31:37.278636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:43.737 [2024-11-26 20:31:37.278649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.737 [2024-11-26 20:31:37.281119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.737 [2024-11-26 20:31:37.281165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:43.737 BaseBdev2 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.737 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 BaseBdev3_malloc 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 [2024-11-26 20:31:37.349651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:43.997 [2024-11-26 20:31:37.349728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.997 [2024-11-26 20:31:37.349755] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:18:43.997 [2024-11-26 20:31:37.349767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.997 [2024-11-26 20:31:37.352172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.997 [2024-11-26 20:31:37.352217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:43.997 BaseBdev3 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 spare_malloc 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 spare_delay 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 [2024-11-26 20:31:37.416839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:43.997 [2024-11-26 20:31:37.416915] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.997 [2024-11-26 20:31:37.416939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:43.997 [2024-11-26 20:31:37.416952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.997 [2024-11-26 20:31:37.419417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.997 [2024-11-26 20:31:37.419466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:43.997 spare 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 [2024-11-26 20:31:37.424888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.997 [2024-11-26 20:31:37.426962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.997 [2024-11-26 20:31:37.427035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.997 [2024-11-26 20:31:37.427128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:43.997 [2024-11-26 20:31:37.427140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:43.997 [2024-11-26 20:31:37.427490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.997 [2024-11-26 20:31:37.433769] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:43.997 [2024-11-26 20:31:37.433800] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:43.997 [2024-11-26 20:31:37.434067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 20:31:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.997 "name": "raid_bdev1", 00:18:43.997 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:43.997 "strip_size_kb": 64, 00:18:43.997 "state": "online", 00:18:43.997 "raid_level": "raid5f", 00:18:43.997 "superblock": false, 00:18:43.997 "num_base_bdevs": 3, 00:18:43.997 "num_base_bdevs_discovered": 3, 00:18:43.997 "num_base_bdevs_operational": 3, 00:18:43.997 "base_bdevs_list": [ 00:18:43.997 { 00:18:43.997 "name": "BaseBdev1", 00:18:43.997 "uuid": "43104c03-81e4-5534-b9f2-a9f1d8d17ec1", 00:18:43.997 "is_configured": true, 00:18:43.997 "data_offset": 0, 00:18:43.997 "data_size": 65536 00:18:43.997 }, 00:18:43.997 { 00:18:43.997 "name": "BaseBdev2", 00:18:43.997 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:43.997 "is_configured": true, 00:18:43.997 "data_offset": 0, 00:18:43.997 "data_size": 65536 00:18:43.997 }, 00:18:43.997 { 00:18:43.997 "name": "BaseBdev3", 00:18:43.997 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:43.997 "is_configured": true, 00:18:43.997 "data_offset": 0, 00:18:43.997 "data_size": 65536 00:18:43.997 } 00:18:43.997 ] 00:18:43.997 }' 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.997 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.566 [2024-11-26 20:31:37.861364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:18:44.566 20:31:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:44.848 [2024-11-26 20:31:38.168874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:44.848 /dev/nbd0 00:18:44.848 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:44.848 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:44.848 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.849 1+0 records in 00:18:44.849 1+0 records out 00:18:44.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00236486 s, 1.7 MB/s 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:44.849 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:45.436 512+0 records in 00:18:45.436 512+0 records out 00:18:45.436 67108864 bytes (67 MB, 64 MiB) copied, 0.465288 s, 144 MB/s 00:18:45.436 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:45.436 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.436 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:45.436 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.436 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:45.436 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.436 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:45.436 [2024-11-26 20:31:38.982738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:45.695 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:45.695 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:45.695 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:45.695 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.695 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.695 20:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:45.695 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:45.695 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.696 [2024-11-26 20:31:39.007884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.696 "name": "raid_bdev1", 00:18:45.696 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:45.696 "strip_size_kb": 64, 00:18:45.696 "state": "online", 00:18:45.696 "raid_level": "raid5f", 00:18:45.696 "superblock": false, 00:18:45.696 "num_base_bdevs": 3, 00:18:45.696 "num_base_bdevs_discovered": 2, 00:18:45.696 "num_base_bdevs_operational": 2, 00:18:45.696 "base_bdevs_list": [ 00:18:45.696 { 00:18:45.696 "name": null, 00:18:45.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.696 "is_configured": false, 00:18:45.696 "data_offset": 0, 00:18:45.696 "data_size": 65536 00:18:45.696 }, 00:18:45.696 { 00:18:45.696 "name": "BaseBdev2", 00:18:45.696 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:45.696 "is_configured": true, 00:18:45.696 "data_offset": 0, 00:18:45.696 "data_size": 65536 00:18:45.696 }, 00:18:45.696 { 00:18:45.696 "name": "BaseBdev3", 00:18:45.696 "uuid": 
"12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:45.696 "is_configured": true, 00:18:45.696 "data_offset": 0, 00:18:45.696 "data_size": 65536 00:18:45.696 } 00:18:45.696 ] 00:18:45.696 }' 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.696 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.954 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:45.954 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.955 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.955 [2024-11-26 20:31:39.491066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.214 [2024-11-26 20:31:39.513098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:46.214 20:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.214 20:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:46.214 [2024-11-26 20:31:39.523602] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.151 20:31:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.151 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.151 "name": "raid_bdev1", 00:18:47.151 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:47.151 "strip_size_kb": 64, 00:18:47.151 "state": "online", 00:18:47.151 "raid_level": "raid5f", 00:18:47.151 "superblock": false, 00:18:47.151 "num_base_bdevs": 3, 00:18:47.151 "num_base_bdevs_discovered": 3, 00:18:47.151 "num_base_bdevs_operational": 3, 00:18:47.151 "process": { 00:18:47.151 "type": "rebuild", 00:18:47.151 "target": "spare", 00:18:47.151 "progress": { 00:18:47.151 "blocks": 18432, 00:18:47.151 "percent": 14 00:18:47.151 } 00:18:47.151 }, 00:18:47.151 "base_bdevs_list": [ 00:18:47.151 { 00:18:47.151 "name": "spare", 00:18:47.151 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:47.151 "is_configured": true, 00:18:47.151 "data_offset": 0, 00:18:47.151 "data_size": 65536 00:18:47.151 }, 00:18:47.151 { 00:18:47.151 "name": "BaseBdev2", 00:18:47.151 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:47.151 "is_configured": true, 00:18:47.151 "data_offset": 0, 00:18:47.151 "data_size": 65536 00:18:47.151 }, 00:18:47.151 { 00:18:47.151 "name": "BaseBdev3", 00:18:47.152 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:47.152 "is_configured": true, 00:18:47.152 "data_offset": 0, 00:18:47.152 "data_size": 65536 00:18:47.152 } 00:18:47.152 ] 00:18:47.152 }' 00:18:47.152 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.152 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.152 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.152 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.152 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:47.152 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.152 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.152 [2024-11-26 20:31:40.651837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.410 [2024-11-26 20:31:40.735992] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:47.410 [2024-11-26 20:31:40.736118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.410 [2024-11-26 20:31:40.736143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.410 [2024-11-26 20:31:40.736153] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.410 "name": "raid_bdev1", 00:18:47.410 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:47.410 "strip_size_kb": 64, 00:18:47.410 "state": "online", 00:18:47.410 "raid_level": "raid5f", 00:18:47.410 "superblock": false, 00:18:47.410 "num_base_bdevs": 3, 00:18:47.410 "num_base_bdevs_discovered": 2, 00:18:47.410 "num_base_bdevs_operational": 2, 00:18:47.410 "base_bdevs_list": [ 00:18:47.410 { 00:18:47.410 "name": null, 00:18:47.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.410 "is_configured": false, 00:18:47.410 "data_offset": 0, 00:18:47.410 "data_size": 65536 00:18:47.410 }, 00:18:47.410 { 00:18:47.410 "name": "BaseBdev2", 00:18:47.410 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:47.410 "is_configured": true, 00:18:47.410 "data_offset": 0, 00:18:47.410 "data_size": 65536 00:18:47.410 }, 00:18:47.410 { 00:18:47.410 "name": "BaseBdev3", 00:18:47.410 "uuid": 
"12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:47.410 "is_configured": true, 00:18:47.410 "data_offset": 0, 00:18:47.410 "data_size": 65536 00:18:47.410 } 00:18:47.410 ] 00:18:47.410 }' 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.410 20:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.979 "name": "raid_bdev1", 00:18:47.979 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:47.979 "strip_size_kb": 64, 00:18:47.979 "state": "online", 00:18:47.979 "raid_level": "raid5f", 00:18:47.979 "superblock": false, 00:18:47.979 "num_base_bdevs": 3, 00:18:47.979 "num_base_bdevs_discovered": 2, 00:18:47.979 "num_base_bdevs_operational": 2, 00:18:47.979 "base_bdevs_list": [ 00:18:47.979 { 00:18:47.979 
"name": null, 00:18:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.979 "is_configured": false, 00:18:47.979 "data_offset": 0, 00:18:47.979 "data_size": 65536 00:18:47.979 }, 00:18:47.979 { 00:18:47.979 "name": "BaseBdev2", 00:18:47.979 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:47.979 "is_configured": true, 00:18:47.979 "data_offset": 0, 00:18:47.979 "data_size": 65536 00:18:47.979 }, 00:18:47.979 { 00:18:47.979 "name": "BaseBdev3", 00:18:47.979 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:47.979 "is_configured": true, 00:18:47.979 "data_offset": 0, 00:18:47.979 "data_size": 65536 00:18:47.979 } 00:18:47.979 ] 00:18:47.979 }' 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.979 [2024-11-26 20:31:41.369538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.979 [2024-11-26 20:31:41.389158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.979 20:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:47.979 [2024-11-26 20:31:41.398847] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.915 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.915 "name": "raid_bdev1", 00:18:48.915 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:48.915 "strip_size_kb": 64, 00:18:48.915 "state": "online", 00:18:48.915 "raid_level": "raid5f", 00:18:48.915 "superblock": false, 00:18:48.915 "num_base_bdevs": 3, 00:18:48.915 "num_base_bdevs_discovered": 3, 00:18:48.915 "num_base_bdevs_operational": 3, 00:18:48.915 "process": { 00:18:48.915 "type": "rebuild", 00:18:48.915 "target": "spare", 00:18:48.915 "progress": { 00:18:48.915 "blocks": 20480, 00:18:48.915 "percent": 15 00:18:48.915 } 00:18:48.915 }, 00:18:48.915 "base_bdevs_list": [ 00:18:48.915 { 00:18:48.915 "name": "spare", 00:18:48.915 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:48.915 "is_configured": true, 00:18:48.915 "data_offset": 0, 
00:18:48.915 "data_size": 65536 00:18:48.915 }, 00:18:48.915 { 00:18:48.915 "name": "BaseBdev2", 00:18:48.915 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:48.915 "is_configured": true, 00:18:48.915 "data_offset": 0, 00:18:48.915 "data_size": 65536 00:18:48.915 }, 00:18:48.915 { 00:18:48.915 "name": "BaseBdev3", 00:18:48.916 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:48.916 "is_configured": true, 00:18:48.916 "data_offset": 0, 00:18:48.916 "data_size": 65536 00:18:48.916 } 00:18:48.916 ] 00:18:48.916 }' 00:18:48.916 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=575 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.174 20:31:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.174 "name": "raid_bdev1", 00:18:49.174 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:49.174 "strip_size_kb": 64, 00:18:49.174 "state": "online", 00:18:49.174 "raid_level": "raid5f", 00:18:49.174 "superblock": false, 00:18:49.174 "num_base_bdevs": 3, 00:18:49.174 "num_base_bdevs_discovered": 3, 00:18:49.174 "num_base_bdevs_operational": 3, 00:18:49.174 "process": { 00:18:49.174 "type": "rebuild", 00:18:49.174 "target": "spare", 00:18:49.174 "progress": { 00:18:49.174 "blocks": 22528, 00:18:49.174 "percent": 17 00:18:49.174 } 00:18:49.174 }, 00:18:49.174 "base_bdevs_list": [ 00:18:49.174 { 00:18:49.174 "name": "spare", 00:18:49.174 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:49.174 "is_configured": true, 00:18:49.174 "data_offset": 0, 00:18:49.174 "data_size": 65536 00:18:49.174 }, 00:18:49.174 { 00:18:49.174 "name": "BaseBdev2", 00:18:49.174 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:49.174 "is_configured": true, 00:18:49.174 "data_offset": 0, 00:18:49.174 "data_size": 65536 00:18:49.174 }, 00:18:49.174 { 00:18:49.174 "name": "BaseBdev3", 00:18:49.174 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:49.174 "is_configured": true, 00:18:49.174 "data_offset": 0, 00:18:49.174 "data_size": 65536 00:18:49.174 } 
00:18:49.174 ] 00:18:49.174 }' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.174 20:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.550 "name": "raid_bdev1", 00:18:50.550 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:50.550 
"strip_size_kb": 64, 00:18:50.550 "state": "online", 00:18:50.550 "raid_level": "raid5f", 00:18:50.550 "superblock": false, 00:18:50.550 "num_base_bdevs": 3, 00:18:50.550 "num_base_bdevs_discovered": 3, 00:18:50.550 "num_base_bdevs_operational": 3, 00:18:50.550 "process": { 00:18:50.550 "type": "rebuild", 00:18:50.550 "target": "spare", 00:18:50.550 "progress": { 00:18:50.550 "blocks": 45056, 00:18:50.550 "percent": 34 00:18:50.550 } 00:18:50.550 }, 00:18:50.550 "base_bdevs_list": [ 00:18:50.550 { 00:18:50.550 "name": "spare", 00:18:50.550 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:50.550 "is_configured": true, 00:18:50.550 "data_offset": 0, 00:18:50.550 "data_size": 65536 00:18:50.550 }, 00:18:50.550 { 00:18:50.550 "name": "BaseBdev2", 00:18:50.550 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:50.550 "is_configured": true, 00:18:50.550 "data_offset": 0, 00:18:50.550 "data_size": 65536 00:18:50.550 }, 00:18:50.550 { 00:18:50.550 "name": "BaseBdev3", 00:18:50.550 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:50.550 "is_configured": true, 00:18:50.550 "data_offset": 0, 00:18:50.550 "data_size": 65536 00:18:50.550 } 00:18:50.550 ] 00:18:50.550 }' 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.550 20:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.488 20:31:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.488 "name": "raid_bdev1", 00:18:51.488 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:51.488 "strip_size_kb": 64, 00:18:51.488 "state": "online", 00:18:51.488 "raid_level": "raid5f", 00:18:51.488 "superblock": false, 00:18:51.488 "num_base_bdevs": 3, 00:18:51.488 "num_base_bdevs_discovered": 3, 00:18:51.488 "num_base_bdevs_operational": 3, 00:18:51.488 "process": { 00:18:51.488 "type": "rebuild", 00:18:51.488 "target": "spare", 00:18:51.488 "progress": { 00:18:51.488 "blocks": 69632, 00:18:51.488 "percent": 53 00:18:51.488 } 00:18:51.488 }, 00:18:51.488 "base_bdevs_list": [ 00:18:51.488 { 00:18:51.488 "name": "spare", 00:18:51.488 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:51.488 "is_configured": true, 00:18:51.488 "data_offset": 0, 00:18:51.488 "data_size": 65536 00:18:51.488 }, 00:18:51.488 { 00:18:51.488 "name": "BaseBdev2", 00:18:51.488 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:51.488 
"is_configured": true, 00:18:51.488 "data_offset": 0, 00:18:51.488 "data_size": 65536 00:18:51.488 }, 00:18:51.488 { 00:18:51.488 "name": "BaseBdev3", 00:18:51.488 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:51.488 "is_configured": true, 00:18:51.488 "data_offset": 0, 00:18:51.488 "data_size": 65536 00:18:51.488 } 00:18:51.488 ] 00:18:51.488 }' 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.488 20:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.867 "name": "raid_bdev1", 00:18:52.867 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:52.867 "strip_size_kb": 64, 00:18:52.867 "state": "online", 00:18:52.867 "raid_level": "raid5f", 00:18:52.867 "superblock": false, 00:18:52.867 "num_base_bdevs": 3, 00:18:52.867 "num_base_bdevs_discovered": 3, 00:18:52.867 "num_base_bdevs_operational": 3, 00:18:52.867 "process": { 00:18:52.867 "type": "rebuild", 00:18:52.867 "target": "spare", 00:18:52.867 "progress": { 00:18:52.867 "blocks": 92160, 00:18:52.867 "percent": 70 00:18:52.867 } 00:18:52.867 }, 00:18:52.867 "base_bdevs_list": [ 00:18:52.867 { 00:18:52.867 "name": "spare", 00:18:52.867 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:52.867 "is_configured": true, 00:18:52.867 "data_offset": 0, 00:18:52.867 "data_size": 65536 00:18:52.867 }, 00:18:52.867 { 00:18:52.867 "name": "BaseBdev2", 00:18:52.867 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:52.867 "is_configured": true, 00:18:52.867 "data_offset": 0, 00:18:52.867 "data_size": 65536 00:18:52.867 }, 00:18:52.867 { 00:18:52.867 "name": "BaseBdev3", 00:18:52.867 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:52.867 "is_configured": true, 00:18:52.867 "data_offset": 0, 00:18:52.867 "data_size": 65536 00:18:52.867 } 00:18:52.867 ] 00:18:52.867 }' 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.867 20:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.867 20:31:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.837 "name": "raid_bdev1", 00:18:53.837 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:53.837 "strip_size_kb": 64, 00:18:53.837 "state": "online", 00:18:53.837 "raid_level": "raid5f", 00:18:53.837 "superblock": false, 00:18:53.837 "num_base_bdevs": 3, 00:18:53.837 "num_base_bdevs_discovered": 3, 00:18:53.837 "num_base_bdevs_operational": 3, 00:18:53.837 "process": { 00:18:53.837 "type": "rebuild", 00:18:53.837 "target": "spare", 00:18:53.837 "progress": { 00:18:53.837 "blocks": 114688, 00:18:53.837 "percent": 87 00:18:53.837 } 00:18:53.837 }, 00:18:53.837 "base_bdevs_list": [ 00:18:53.837 { 
00:18:53.837 "name": "spare", 00:18:53.837 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:53.837 "is_configured": true, 00:18:53.837 "data_offset": 0, 00:18:53.837 "data_size": 65536 00:18:53.837 }, 00:18:53.837 { 00:18:53.837 "name": "BaseBdev2", 00:18:53.837 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:53.837 "is_configured": true, 00:18:53.837 "data_offset": 0, 00:18:53.837 "data_size": 65536 00:18:53.837 }, 00:18:53.837 { 00:18:53.837 "name": "BaseBdev3", 00:18:53.837 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:53.837 "is_configured": true, 00:18:53.837 "data_offset": 0, 00:18:53.837 "data_size": 65536 00:18:53.837 } 00:18:53.837 ] 00:18:53.837 }' 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.837 20:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.403 [2024-11-26 20:31:47.867877] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:54.403 [2024-11-26 20:31:47.868018] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:54.403 [2024-11-26 20:31:47.868093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.722 20:31:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.722 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.981 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.981 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.981 "name": "raid_bdev1", 00:18:54.981 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:54.981 "strip_size_kb": 64, 00:18:54.981 "state": "online", 00:18:54.981 "raid_level": "raid5f", 00:18:54.981 "superblock": false, 00:18:54.981 "num_base_bdevs": 3, 00:18:54.981 "num_base_bdevs_discovered": 3, 00:18:54.981 "num_base_bdevs_operational": 3, 00:18:54.981 "base_bdevs_list": [ 00:18:54.981 { 00:18:54.981 "name": "spare", 00:18:54.981 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:54.981 "is_configured": true, 00:18:54.981 "data_offset": 0, 00:18:54.981 "data_size": 65536 00:18:54.981 }, 00:18:54.981 { 00:18:54.981 "name": "BaseBdev2", 00:18:54.981 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:54.981 "is_configured": true, 00:18:54.981 "data_offset": 0, 00:18:54.982 "data_size": 65536 00:18:54.982 }, 00:18:54.982 { 00:18:54.982 "name": "BaseBdev3", 00:18:54.982 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:54.982 "is_configured": true, 00:18:54.982 "data_offset": 0, 00:18:54.982 "data_size": 65536 00:18:54.982 } 
00:18:54.982 ] 00:18:54.982 }' 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.982 "name": "raid_bdev1", 00:18:54.982 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:54.982 "strip_size_kb": 64, 00:18:54.982 "state": "online", 00:18:54.982 "raid_level": "raid5f", 00:18:54.982 "superblock": false, 
00:18:54.982 "num_base_bdevs": 3, 00:18:54.982 "num_base_bdevs_discovered": 3, 00:18:54.982 "num_base_bdevs_operational": 3, 00:18:54.982 "base_bdevs_list": [ 00:18:54.982 { 00:18:54.982 "name": "spare", 00:18:54.982 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:54.982 "is_configured": true, 00:18:54.982 "data_offset": 0, 00:18:54.982 "data_size": 65536 00:18:54.982 }, 00:18:54.982 { 00:18:54.982 "name": "BaseBdev2", 00:18:54.982 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:54.982 "is_configured": true, 00:18:54.982 "data_offset": 0, 00:18:54.982 "data_size": 65536 00:18:54.982 }, 00:18:54.982 { 00:18:54.982 "name": "BaseBdev3", 00:18:54.982 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 00:18:54.982 "is_configured": true, 00:18:54.982 "data_offset": 0, 00:18:54.982 "data_size": 65536 00:18:54.982 } 00:18:54.982 ] 00:18:54.982 }' 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.982 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.240 
20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.240 "name": "raid_bdev1", 00:18:55.240 "uuid": "4a583c03-7938-4c5e-9284-440bcd959e04", 00:18:55.240 "strip_size_kb": 64, 00:18:55.240 "state": "online", 00:18:55.240 "raid_level": "raid5f", 00:18:55.240 "superblock": false, 00:18:55.240 "num_base_bdevs": 3, 00:18:55.240 "num_base_bdevs_discovered": 3, 00:18:55.240 "num_base_bdevs_operational": 3, 00:18:55.240 "base_bdevs_list": [ 00:18:55.240 { 00:18:55.240 "name": "spare", 00:18:55.240 "uuid": "0ba29886-3706-5011-b172-280a2a95b9f3", 00:18:55.240 "is_configured": true, 00:18:55.240 "data_offset": 0, 00:18:55.240 "data_size": 65536 00:18:55.240 }, 00:18:55.240 { 00:18:55.240 "name": "BaseBdev2", 00:18:55.240 "uuid": "495f4e04-b1ab-5572-8145-3afde2e39744", 00:18:55.240 "is_configured": true, 00:18:55.240 "data_offset": 0, 00:18:55.240 "data_size": 65536 00:18:55.240 }, 00:18:55.240 { 00:18:55.240 "name": "BaseBdev3", 00:18:55.240 "uuid": "12be5b85-8277-5a2e-8ef6-187dcb73ca65", 
00:18:55.240 "is_configured": true, 00:18:55.240 "data_offset": 0, 00:18:55.240 "data_size": 65536 00:18:55.240 } 00:18:55.240 ] 00:18:55.240 }' 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.240 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 [2024-11-26 20:31:48.977900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.499 [2024-11-26 20:31:48.977940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.499 [2024-11-26 20:31:48.978058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.499 [2024-11-26 20:31:48.978159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.499 [2024-11-26 20:31:48.978179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 20:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.499 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:55.761 /dev/nbd0 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:55.761 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.761 1+0 records in 00:18:55.761 1+0 records out 00:18:55.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481217 s, 8.5 MB/s 00:18:55.762 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.020 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:56.020 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.020 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:56.020 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:56.020 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:56.020 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.020 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:56.281 /dev/nbd1 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:56.281 20:31:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:56.281 1+0 records in 00:18:56.281 1+0 records out 00:18:56.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544065 s, 7.5 MB/s 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.281 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:56.540 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:56.540 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:56.540 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.540 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:56.540 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:56.540 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.540 20:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.540 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:56.798 20:31:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:56.798 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:56.798 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:56.798 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.798 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82024 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82024 ']' 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82024 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.799 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82024 00:18:57.057 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.057 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.057 killing process with pid 82024 00:18:57.057 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82024' 00:18:57.057 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82024 00:18:57.057 
Received shutdown signal, test time was about 60.000000 seconds 00:18:57.057 00:18:57.057 Latency(us) 00:18:57.057 [2024-11-26T20:31:50.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.057 [2024-11-26T20:31:50.612Z] =================================================================================================================== 00:18:57.057 [2024-11-26T20:31:50.612Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.057 [2024-11-26 20:31:50.375639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.057 20:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82024 00:18:57.317 [2024-11-26 20:31:50.843621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:58.717 00:18:58.717 real 0m15.984s 00:18:58.717 user 0m19.725s 00:18:58.717 sys 0m2.086s 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.717 ************************************ 00:18:58.717 END TEST raid5f_rebuild_test 00:18:58.717 ************************************ 00:18:58.717 20:31:52 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:58.717 20:31:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:58.717 20:31:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.717 20:31:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.717 ************************************ 00:18:58.717 START TEST raid5f_rebuild_test_sb 00:18:58.717 ************************************ 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:18:58.717 
20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82472 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82472 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82472 ']' 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.717 20:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.977 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:58.977 Zero copy mechanism will not be used. 00:18:58.977 [2024-11-26 20:31:52.295418] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:18:58.977 [2024-11-26 20:31:52.295553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82472 ] 00:18:58.977 [2024-11-26 20:31:52.475204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.237 [2024-11-26 20:31:52.607114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.496 [2024-11-26 20:31:52.829370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.496 [2024-11-26 20:31:52.829453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.756 20:31:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.756 BaseBdev1_malloc 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.756 [2024-11-26 20:31:53.226256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.756 [2024-11-26 20:31:53.226325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.756 [2024-11-26 20:31:53.226350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:59.756 [2024-11-26 20:31:53.226372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.756 [2024-11-26 20:31:53.228751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.756 [2024-11-26 20:31:53.228795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.756 BaseBdev1 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.756 BaseBdev2_malloc 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.756 [2024-11-26 20:31:53.285016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:59.756 [2024-11-26 20:31:53.285092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.756 [2024-11-26 20:31:53.285119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:59.756 [2024-11-26 20:31:53.285132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.756 [2024-11-26 20:31:53.287461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.756 [2024-11-26 20:31:53.287499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:59.756 BaseBdev2 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.756 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.015 BaseBdev3_malloc 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.016 [2024-11-26 20:31:53.361148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:00.016 [2024-11-26 20:31:53.361218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.016 [2024-11-26 20:31:53.361256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:00.016 [2024-11-26 20:31:53.361270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.016 [2024-11-26 20:31:53.363674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.016 [2024-11-26 20:31:53.363720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:00.016 BaseBdev3 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.016 spare_malloc 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.016 spare_delay 00:19:00.016 
20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.016 [2024-11-26 20:31:53.432356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.016 [2024-11-26 20:31:53.432460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.016 [2024-11-26 20:31:53.432482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:00.016 [2024-11-26 20:31:53.432494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.016 [2024-11-26 20:31:53.434917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.016 [2024-11-26 20:31:53.434964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.016 spare 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.016 [2024-11-26 20:31:53.444430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.016 [2024-11-26 20:31:53.446520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.016 [2024-11-26 20:31:53.446610] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.016 [2024-11-26 20:31:53.446826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:00.016 [2024-11-26 20:31:53.446848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:00.016 [2024-11-26 20:31:53.447146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:00.016 [2024-11-26 20:31:53.453382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:00.016 [2024-11-26 20:31:53.453412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:00.016 [2024-11-26 20:31:53.453664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.016 "name": "raid_bdev1", 00:19:00.016 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:00.016 "strip_size_kb": 64, 00:19:00.016 "state": "online", 00:19:00.016 "raid_level": "raid5f", 00:19:00.016 "superblock": true, 00:19:00.016 "num_base_bdevs": 3, 00:19:00.016 "num_base_bdevs_discovered": 3, 00:19:00.016 "num_base_bdevs_operational": 3, 00:19:00.016 "base_bdevs_list": [ 00:19:00.016 { 00:19:00.016 "name": "BaseBdev1", 00:19:00.016 "uuid": "4389d01f-c2ee-56c2-ac05-c7220db3d949", 00:19:00.016 "is_configured": true, 00:19:00.016 "data_offset": 2048, 00:19:00.016 "data_size": 63488 00:19:00.016 }, 00:19:00.016 { 00:19:00.016 "name": "BaseBdev2", 00:19:00.016 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:00.016 "is_configured": true, 00:19:00.016 "data_offset": 2048, 00:19:00.016 "data_size": 63488 00:19:00.016 }, 00:19:00.016 { 00:19:00.016 "name": "BaseBdev3", 00:19:00.016 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:00.016 "is_configured": true, 00:19:00.016 "data_offset": 2048, 00:19:00.016 "data_size": 63488 00:19:00.016 } 00:19:00.016 ] 00:19:00.016 }' 00:19:00.016 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.016 20:31:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:00.585 [2024-11-26 20:31:53.892862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:00.585 20:31:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:00.585 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:00.586 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:00.586 20:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:00.845 [2024-11-26 20:31:54.200130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:00.845 /dev/nbd0 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.845 1+0 records in 00:19:00.845 1+0 records out 00:19:00.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426555 s, 9.6 MB/s 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:00.845 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:01.413 496+0 records in 00:19:01.413 496+0 records out 00:19:01.413 65011712 bytes (65 MB, 62 MiB) copied, 0.424851 s, 153 MB/s 00:19:01.413 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:01.413 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.413 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:01.413 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:01.413 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:01.413 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:01.413 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:01.673 [2024-11-26 20:31:54.973201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.673 [2024-11-26 20:31:54.994362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.673 20:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.673 "name": "raid_bdev1", 00:19:01.673 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:01.673 "strip_size_kb": 64, 00:19:01.673 "state": "online", 00:19:01.673 "raid_level": "raid5f", 00:19:01.673 "superblock": true, 00:19:01.673 "num_base_bdevs": 3, 00:19:01.673 "num_base_bdevs_discovered": 2, 00:19:01.673 "num_base_bdevs_operational": 2, 00:19:01.673 "base_bdevs_list": [ 00:19:01.673 { 00:19:01.673 "name": null, 00:19:01.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.673 "is_configured": false, 00:19:01.673 "data_offset": 0, 00:19:01.673 "data_size": 63488 00:19:01.673 }, 00:19:01.673 { 00:19:01.673 "name": "BaseBdev2", 00:19:01.673 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:01.673 "is_configured": true, 00:19:01.673 "data_offset": 2048, 00:19:01.673 "data_size": 63488 00:19:01.673 }, 00:19:01.673 { 00:19:01.673 "name": "BaseBdev3", 00:19:01.673 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:01.673 "is_configured": true, 00:19:01.673 "data_offset": 2048, 00:19:01.673 "data_size": 63488 00:19:01.673 } 00:19:01.673 ] 00:19:01.673 }' 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.673 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.933 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.933 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.933 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.193 [2024-11-26 20:31:55.493534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.193 [2024-11-26 20:31:55.513816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:02.193 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.193 20:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:02.193 [2024-11-26 20:31:55.523104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.135 "name": "raid_bdev1", 00:19:03.135 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:03.135 "strip_size_kb": 64, 00:19:03.135 "state": "online", 00:19:03.135 "raid_level": "raid5f", 00:19:03.135 "superblock": true, 00:19:03.135 "num_base_bdevs": 3, 00:19:03.135 "num_base_bdevs_discovered": 3, 00:19:03.135 "num_base_bdevs_operational": 3, 00:19:03.135 "process": { 00:19:03.135 "type": "rebuild", 00:19:03.135 "target": "spare", 00:19:03.135 "progress": { 
00:19:03.135 "blocks": 18432, 00:19:03.135 "percent": 14 00:19:03.135 } 00:19:03.135 }, 00:19:03.135 "base_bdevs_list": [ 00:19:03.135 { 00:19:03.135 "name": "spare", 00:19:03.135 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:03.135 "is_configured": true, 00:19:03.135 "data_offset": 2048, 00:19:03.135 "data_size": 63488 00:19:03.135 }, 00:19:03.135 { 00:19:03.135 "name": "BaseBdev2", 00:19:03.135 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:03.135 "is_configured": true, 00:19:03.135 "data_offset": 2048, 00:19:03.135 "data_size": 63488 00:19:03.135 }, 00:19:03.135 { 00:19:03.135 "name": "BaseBdev3", 00:19:03.135 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:03.135 "is_configured": true, 00:19:03.135 "data_offset": 2048, 00:19:03.135 "data_size": 63488 00:19:03.135 } 00:19:03.135 ] 00:19:03.135 }' 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.135 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.135 [2024-11-26 20:31:56.671593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.395 [2024-11-26 20:31:56.735740] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:03.395 [2024-11-26 20:31:56.735829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:03.395 [2024-11-26 20:31:56.735870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.395 [2024-11-26 20:31:56.735880] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.395 20:31:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.395 "name": "raid_bdev1", 00:19:03.395 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:03.395 "strip_size_kb": 64, 00:19:03.395 "state": "online", 00:19:03.395 "raid_level": "raid5f", 00:19:03.395 "superblock": true, 00:19:03.395 "num_base_bdevs": 3, 00:19:03.395 "num_base_bdevs_discovered": 2, 00:19:03.395 "num_base_bdevs_operational": 2, 00:19:03.395 "base_bdevs_list": [ 00:19:03.395 { 00:19:03.395 "name": null, 00:19:03.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.395 "is_configured": false, 00:19:03.395 "data_offset": 0, 00:19:03.395 "data_size": 63488 00:19:03.395 }, 00:19:03.395 { 00:19:03.395 "name": "BaseBdev2", 00:19:03.395 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:03.395 "is_configured": true, 00:19:03.395 "data_offset": 2048, 00:19:03.395 "data_size": 63488 00:19:03.395 }, 00:19:03.395 { 00:19:03.395 "name": "BaseBdev3", 00:19:03.395 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:03.395 "is_configured": true, 00:19:03.395 "data_offset": 2048, 00:19:03.395 "data_size": 63488 00:19:03.395 } 00:19:03.395 ] 00:19:03.395 }' 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.395 20:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.964 "name": "raid_bdev1", 00:19:03.964 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:03.964 "strip_size_kb": 64, 00:19:03.964 "state": "online", 00:19:03.964 "raid_level": "raid5f", 00:19:03.964 "superblock": true, 00:19:03.964 "num_base_bdevs": 3, 00:19:03.964 "num_base_bdevs_discovered": 2, 00:19:03.964 "num_base_bdevs_operational": 2, 00:19:03.964 "base_bdevs_list": [ 00:19:03.964 { 00:19:03.964 "name": null, 00:19:03.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.964 "is_configured": false, 00:19:03.964 "data_offset": 0, 00:19:03.964 "data_size": 63488 00:19:03.964 }, 00:19:03.964 { 00:19:03.964 "name": "BaseBdev2", 00:19:03.964 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:03.964 "is_configured": true, 00:19:03.964 "data_offset": 2048, 00:19:03.964 "data_size": 63488 00:19:03.964 }, 00:19:03.964 { 00:19:03.964 "name": "BaseBdev3", 00:19:03.964 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:03.964 "is_configured": true, 00:19:03.964 "data_offset": 2048, 00:19:03.964 "data_size": 63488 00:19:03.964 } 00:19:03.964 ] 00:19:03.964 }' 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.964 [2024-11-26 20:31:57.377490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.964 [2024-11-26 20:31:57.397397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.964 20:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:03.964 [2024-11-26 20:31:57.406948] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.902 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.902 "name": "raid_bdev1", 00:19:04.902 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:04.902 "strip_size_kb": 64, 00:19:04.902 "state": "online", 00:19:04.902 "raid_level": "raid5f", 00:19:04.902 "superblock": true, 00:19:04.902 "num_base_bdevs": 3, 00:19:04.902 "num_base_bdevs_discovered": 3, 00:19:04.902 "num_base_bdevs_operational": 3, 00:19:04.902 "process": { 00:19:04.902 "type": "rebuild", 00:19:04.902 "target": "spare", 00:19:04.902 "progress": { 00:19:04.902 "blocks": 18432, 00:19:04.902 "percent": 14 00:19:04.902 } 00:19:04.902 }, 00:19:04.902 "base_bdevs_list": [ 00:19:04.902 { 00:19:04.902 "name": "spare", 00:19:04.902 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:04.902 "is_configured": true, 00:19:04.902 "data_offset": 2048, 00:19:04.902 "data_size": 63488 00:19:04.902 }, 00:19:04.902 { 00:19:04.902 "name": "BaseBdev2", 00:19:04.902 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:04.902 "is_configured": true, 00:19:04.902 "data_offset": 2048, 00:19:04.902 "data_size": 63488 00:19:04.902 }, 00:19:04.902 { 00:19:04.902 "name": "BaseBdev3", 00:19:04.902 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:04.902 "is_configured": true, 00:19:04.902 "data_offset": 2048, 00:19:04.902 "data_size": 63488 00:19:04.902 } 00:19:04.902 ] 00:19:04.902 }' 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.161 
20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:05.161 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=591 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.161 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.161 "name": "raid_bdev1", 00:19:05.161 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:05.161 "strip_size_kb": 64, 00:19:05.161 "state": "online", 00:19:05.161 "raid_level": "raid5f", 00:19:05.162 "superblock": true, 00:19:05.162 "num_base_bdevs": 3, 00:19:05.162 "num_base_bdevs_discovered": 3, 00:19:05.162 "num_base_bdevs_operational": 3, 00:19:05.162 "process": { 00:19:05.162 "type": "rebuild", 00:19:05.162 "target": "spare", 00:19:05.162 "progress": { 00:19:05.162 "blocks": 22528, 00:19:05.162 "percent": 17 00:19:05.162 } 00:19:05.162 }, 00:19:05.162 "base_bdevs_list": [ 00:19:05.162 { 00:19:05.162 "name": "spare", 00:19:05.162 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:05.162 "is_configured": true, 00:19:05.162 "data_offset": 2048, 00:19:05.162 "data_size": 63488 00:19:05.162 }, 00:19:05.162 { 00:19:05.162 "name": "BaseBdev2", 00:19:05.162 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:05.162 "is_configured": true, 00:19:05.162 "data_offset": 2048, 00:19:05.162 "data_size": 63488 00:19:05.162 }, 00:19:05.162 { 00:19:05.162 "name": "BaseBdev3", 00:19:05.162 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:05.162 "is_configured": true, 00:19:05.162 "data_offset": 2048, 00:19:05.162 "data_size": 63488 00:19:05.162 } 00:19:05.162 ] 00:19:05.162 }' 00:19:05.162 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.162 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.162 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.162 20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.162 
20:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.536 "name": "raid_bdev1", 00:19:06.536 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:06.536 "strip_size_kb": 64, 00:19:06.536 "state": "online", 00:19:06.536 "raid_level": "raid5f", 00:19:06.536 "superblock": true, 00:19:06.536 "num_base_bdevs": 3, 00:19:06.536 "num_base_bdevs_discovered": 3, 00:19:06.536 "num_base_bdevs_operational": 3, 00:19:06.536 "process": { 00:19:06.536 "type": "rebuild", 00:19:06.536 "target": "spare", 00:19:06.536 "progress": { 00:19:06.536 "blocks": 45056, 00:19:06.536 "percent": 35 00:19:06.536 } 00:19:06.536 }, 00:19:06.536 
"base_bdevs_list": [ 00:19:06.536 { 00:19:06.536 "name": "spare", 00:19:06.536 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:06.536 "is_configured": true, 00:19:06.536 "data_offset": 2048, 00:19:06.536 "data_size": 63488 00:19:06.536 }, 00:19:06.536 { 00:19:06.536 "name": "BaseBdev2", 00:19:06.536 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:06.536 "is_configured": true, 00:19:06.536 "data_offset": 2048, 00:19:06.536 "data_size": 63488 00:19:06.536 }, 00:19:06.536 { 00:19:06.536 "name": "BaseBdev3", 00:19:06.536 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:06.536 "is_configured": true, 00:19:06.536 "data_offset": 2048, 00:19:06.536 "data_size": 63488 00:19:06.536 } 00:19:06.536 ] 00:19:06.536 }' 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.536 20:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.473 20:32:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.473 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.473 "name": "raid_bdev1", 00:19:07.473 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:07.473 "strip_size_kb": 64, 00:19:07.473 "state": "online", 00:19:07.473 "raid_level": "raid5f", 00:19:07.473 "superblock": true, 00:19:07.473 "num_base_bdevs": 3, 00:19:07.473 "num_base_bdevs_discovered": 3, 00:19:07.473 "num_base_bdevs_operational": 3, 00:19:07.473 "process": { 00:19:07.473 "type": "rebuild", 00:19:07.473 "target": "spare", 00:19:07.473 "progress": { 00:19:07.473 "blocks": 69632, 00:19:07.473 "percent": 54 00:19:07.474 } 00:19:07.474 }, 00:19:07.474 "base_bdevs_list": [ 00:19:07.474 { 00:19:07.474 "name": "spare", 00:19:07.474 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:07.474 "is_configured": true, 00:19:07.474 "data_offset": 2048, 00:19:07.474 "data_size": 63488 00:19:07.474 }, 00:19:07.474 { 00:19:07.474 "name": "BaseBdev2", 00:19:07.474 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:07.474 "is_configured": true, 00:19:07.474 "data_offset": 2048, 00:19:07.474 "data_size": 63488 00:19:07.474 }, 00:19:07.474 { 00:19:07.474 "name": "BaseBdev3", 00:19:07.474 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:07.474 "is_configured": true, 00:19:07.474 "data_offset": 2048, 00:19:07.474 "data_size": 63488 00:19:07.474 } 00:19:07.474 ] 00:19:07.474 }' 00:19:07.474 20:32:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.474 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.474 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.474 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.474 20:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:08.852 20:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:08.852 20:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.852 20:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.852 "name": "raid_bdev1", 00:19:08.852 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:08.852 
"strip_size_kb": 64, 00:19:08.852 "state": "online", 00:19:08.852 "raid_level": "raid5f", 00:19:08.852 "superblock": true, 00:19:08.852 "num_base_bdevs": 3, 00:19:08.852 "num_base_bdevs_discovered": 3, 00:19:08.852 "num_base_bdevs_operational": 3, 00:19:08.852 "process": { 00:19:08.852 "type": "rebuild", 00:19:08.852 "target": "spare", 00:19:08.852 "progress": { 00:19:08.852 "blocks": 92160, 00:19:08.852 "percent": 72 00:19:08.852 } 00:19:08.852 }, 00:19:08.852 "base_bdevs_list": [ 00:19:08.852 { 00:19:08.852 "name": "spare", 00:19:08.852 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:08.852 "is_configured": true, 00:19:08.852 "data_offset": 2048, 00:19:08.852 "data_size": 63488 00:19:08.852 }, 00:19:08.852 { 00:19:08.852 "name": "BaseBdev2", 00:19:08.852 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:08.852 "is_configured": true, 00:19:08.852 "data_offset": 2048, 00:19:08.852 "data_size": 63488 00:19:08.852 }, 00:19:08.852 { 00:19:08.852 "name": "BaseBdev3", 00:19:08.852 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:08.852 "is_configured": true, 00:19:08.852 "data_offset": 2048, 00:19:08.852 "data_size": 63488 00:19:08.852 } 00:19:08.852 ] 00:19:08.852 }' 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.852 20:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.788 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.788 "name": "raid_bdev1", 00:19:09.788 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:09.788 "strip_size_kb": 64, 00:19:09.788 "state": "online", 00:19:09.788 "raid_level": "raid5f", 00:19:09.788 "superblock": true, 00:19:09.788 "num_base_bdevs": 3, 00:19:09.788 "num_base_bdevs_discovered": 3, 00:19:09.788 "num_base_bdevs_operational": 3, 00:19:09.788 "process": { 00:19:09.788 "type": "rebuild", 00:19:09.788 "target": "spare", 00:19:09.788 "progress": { 00:19:09.788 "blocks": 116736, 00:19:09.788 "percent": 91 00:19:09.788 } 00:19:09.788 }, 00:19:09.788 "base_bdevs_list": [ 00:19:09.788 { 00:19:09.788 "name": "spare", 00:19:09.788 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:09.788 "is_configured": true, 00:19:09.789 "data_offset": 2048, 00:19:09.789 "data_size": 63488 00:19:09.789 }, 00:19:09.789 { 00:19:09.789 "name": "BaseBdev2", 00:19:09.789 "uuid": 
"9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:09.789 "is_configured": true, 00:19:09.789 "data_offset": 2048, 00:19:09.789 "data_size": 63488 00:19:09.789 }, 00:19:09.789 { 00:19:09.789 "name": "BaseBdev3", 00:19:09.789 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:09.789 "is_configured": true, 00:19:09.789 "data_offset": 2048, 00:19:09.789 "data_size": 63488 00:19:09.789 } 00:19:09.789 ] 00:19:09.789 }' 00:19:09.789 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.789 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.789 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.789 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.789 20:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:10.357 [2024-11-26 20:32:03.669471] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:10.357 [2024-11-26 20:32:03.669684] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:10.357 [2024-11-26 20:32:03.669893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.925 "name": "raid_bdev1", 00:19:10.925 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:10.925 "strip_size_kb": 64, 00:19:10.925 "state": "online", 00:19:10.925 "raid_level": "raid5f", 00:19:10.925 "superblock": true, 00:19:10.925 "num_base_bdevs": 3, 00:19:10.925 "num_base_bdevs_discovered": 3, 00:19:10.925 "num_base_bdevs_operational": 3, 00:19:10.925 "base_bdevs_list": [ 00:19:10.925 { 00:19:10.925 "name": "spare", 00:19:10.925 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:10.925 "is_configured": true, 00:19:10.925 "data_offset": 2048, 00:19:10.925 "data_size": 63488 00:19:10.925 }, 00:19:10.925 { 00:19:10.925 "name": "BaseBdev2", 00:19:10.925 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:10.925 "is_configured": true, 00:19:10.925 "data_offset": 2048, 00:19:10.925 "data_size": 63488 00:19:10.925 }, 00:19:10.925 { 00:19:10.925 "name": "BaseBdev3", 00:19:10.925 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:10.925 "is_configured": true, 00:19:10.925 "data_offset": 2048, 00:19:10.925 "data_size": 63488 00:19:10.925 } 00:19:10.925 ] 00:19:10.925 }' 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.925 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.183 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.183 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.183 "name": "raid_bdev1", 00:19:11.183 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:11.183 "strip_size_kb": 64, 00:19:11.183 "state": "online", 00:19:11.183 "raid_level": "raid5f", 00:19:11.183 "superblock": true, 00:19:11.183 "num_base_bdevs": 3, 00:19:11.183 "num_base_bdevs_discovered": 3, 00:19:11.183 "num_base_bdevs_operational": 3, 00:19:11.183 "base_bdevs_list": [ 
00:19:11.183 { 00:19:11.183 "name": "spare", 00:19:11.183 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:11.184 "is_configured": true, 00:19:11.184 "data_offset": 2048, 00:19:11.184 "data_size": 63488 00:19:11.184 }, 00:19:11.184 { 00:19:11.184 "name": "BaseBdev2", 00:19:11.184 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:11.184 "is_configured": true, 00:19:11.184 "data_offset": 2048, 00:19:11.184 "data_size": 63488 00:19:11.184 }, 00:19:11.184 { 00:19:11.184 "name": "BaseBdev3", 00:19:11.184 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:11.184 "is_configured": true, 00:19:11.184 "data_offset": 2048, 00:19:11.184 "data_size": 63488 00:19:11.184 } 00:19:11.184 ] 00:19:11.184 }' 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.184 20:32:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.184 "name": "raid_bdev1", 00:19:11.184 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:11.184 "strip_size_kb": 64, 00:19:11.184 "state": "online", 00:19:11.184 "raid_level": "raid5f", 00:19:11.184 "superblock": true, 00:19:11.184 "num_base_bdevs": 3, 00:19:11.184 "num_base_bdevs_discovered": 3, 00:19:11.184 "num_base_bdevs_operational": 3, 00:19:11.184 "base_bdevs_list": [ 00:19:11.184 { 00:19:11.184 "name": "spare", 00:19:11.184 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:11.184 "is_configured": true, 00:19:11.184 "data_offset": 2048, 00:19:11.184 "data_size": 63488 00:19:11.184 }, 00:19:11.184 { 00:19:11.184 "name": "BaseBdev2", 00:19:11.184 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:11.184 "is_configured": true, 00:19:11.184 "data_offset": 2048, 00:19:11.184 "data_size": 63488 00:19:11.184 }, 00:19:11.184 { 00:19:11.184 "name": "BaseBdev3", 00:19:11.184 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:11.184 "is_configured": true, 00:19:11.184 "data_offset": 2048, 00:19:11.184 
"data_size": 63488 00:19:11.184 } 00:19:11.184 ] 00:19:11.184 }' 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.184 20:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.752 [2024-11-26 20:32:05.060580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:11.752 [2024-11-26 20:32:05.060664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.752 [2024-11-26 20:32:05.060811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.752 [2024-11-26 20:32:05.060964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.752 [2024-11-26 20:32:05.061041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:11.752 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:12.011 /dev/nbd0 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:12.011 20:32:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:12.011 1+0 records in 00:19:12.011 1+0 records out 00:19:12.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335622 s, 12.2 MB/s 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:12.011 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:12.270 /dev/nbd1 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:12.270 20:32:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:12.270 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:12.271 1+0 records in 00:19:12.271 1+0 records out 00:19:12.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030986 s, 13.2 MB/s 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:12.271 20:32:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:12.271 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:12.530 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:12.530 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:12.530 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:12.530 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:12.530 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:12.530 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.530 20:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:12.791 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:12.791 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:12.791 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:12.791 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:12.791 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:12.791 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:12.791 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:12.792 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:12.792 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.792 
20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.052 [2024-11-26 20:32:06.417095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:13.052 
[2024-11-26 20:32:06.417293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.052 [2024-11-26 20:32:06.417389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:13.052 [2024-11-26 20:32:06.417435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.052 [2024-11-26 20:32:06.420389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.052 [2024-11-26 20:32:06.420505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:13.052 [2024-11-26 20:32:06.420668] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:13.052 [2024-11-26 20:32:06.420778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.052 [2024-11-26 20:32:06.421026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.052 [2024-11-26 20:32:06.421283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.052 spare 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.052 [2024-11-26 20:32:06.521294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:13.052 [2024-11-26 20:32:06.521464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:13.052 [2024-11-26 20:32:06.521922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:19:13.052 [2024-11-26 20:32:06.528908] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:13.052 [2024-11-26 20:32:06.529016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:13.052 [2024-11-26 20:32:06.529377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.052 "name": "raid_bdev1", 00:19:13.052 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:13.052 "strip_size_kb": 64, 00:19:13.052 "state": "online", 00:19:13.052 "raid_level": "raid5f", 00:19:13.052 "superblock": true, 00:19:13.052 "num_base_bdevs": 3, 00:19:13.052 "num_base_bdevs_discovered": 3, 00:19:13.052 "num_base_bdevs_operational": 3, 00:19:13.052 "base_bdevs_list": [ 00:19:13.052 { 00:19:13.052 "name": "spare", 00:19:13.052 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:13.052 "is_configured": true, 00:19:13.052 "data_offset": 2048, 00:19:13.052 "data_size": 63488 00:19:13.052 }, 00:19:13.052 { 00:19:13.052 "name": "BaseBdev2", 00:19:13.052 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:13.052 "is_configured": true, 00:19:13.052 "data_offset": 2048, 00:19:13.052 "data_size": 63488 00:19:13.052 }, 00:19:13.052 { 00:19:13.052 "name": "BaseBdev3", 00:19:13.052 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:13.052 "is_configured": true, 00:19:13.052 "data_offset": 2048, 00:19:13.052 "data_size": 63488 00:19:13.052 } 00:19:13.052 ] 00:19:13.052 }' 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.052 20:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.621 "name": "raid_bdev1", 00:19:13.621 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:13.621 "strip_size_kb": 64, 00:19:13.621 "state": "online", 00:19:13.621 "raid_level": "raid5f", 00:19:13.621 "superblock": true, 00:19:13.621 "num_base_bdevs": 3, 00:19:13.621 "num_base_bdevs_discovered": 3, 00:19:13.621 "num_base_bdevs_operational": 3, 00:19:13.621 "base_bdevs_list": [ 00:19:13.621 { 00:19:13.621 "name": "spare", 00:19:13.621 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:13.621 "is_configured": true, 00:19:13.621 "data_offset": 2048, 00:19:13.621 "data_size": 63488 00:19:13.621 }, 00:19:13.621 { 00:19:13.621 "name": "BaseBdev2", 00:19:13.621 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:13.621 "is_configured": true, 00:19:13.621 "data_offset": 2048, 00:19:13.621 "data_size": 63488 00:19:13.621 }, 00:19:13.621 { 00:19:13.621 "name": "BaseBdev3", 00:19:13.621 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:13.621 "is_configured": true, 00:19:13.621 "data_offset": 2048, 00:19:13.621 "data_size": 63488 00:19:13.621 } 00:19:13.621 ] 00:19:13.621 }' 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:13.621 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.881 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.881 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:13.881 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.882 [2024-11-26 20:32:07.200120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.882 "name": "raid_bdev1", 00:19:13.882 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:13.882 "strip_size_kb": 64, 00:19:13.882 "state": "online", 00:19:13.882 "raid_level": "raid5f", 00:19:13.882 "superblock": true, 00:19:13.882 "num_base_bdevs": 3, 00:19:13.882 "num_base_bdevs_discovered": 2, 00:19:13.882 "num_base_bdevs_operational": 2, 00:19:13.882 "base_bdevs_list": [ 00:19:13.882 { 00:19:13.882 "name": null, 00:19:13.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.882 "is_configured": false, 00:19:13.882 "data_offset": 0, 00:19:13.882 "data_size": 63488 00:19:13.882 }, 00:19:13.882 { 00:19:13.882 "name": "BaseBdev2", 
00:19:13.882 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:13.882 "is_configured": true, 00:19:13.882 "data_offset": 2048, 00:19:13.882 "data_size": 63488 00:19:13.882 }, 00:19:13.882 { 00:19:13.882 "name": "BaseBdev3", 00:19:13.882 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:13.882 "is_configured": true, 00:19:13.882 "data_offset": 2048, 00:19:13.882 "data_size": 63488 00:19:13.882 } 00:19:13.882 ] 00:19:13.882 }' 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.882 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.141 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.141 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.141 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.141 [2024-11-26 20:32:07.691398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.141 [2024-11-26 20:32:07.691669] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:14.141 [2024-11-26 20:32:07.691746] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:14.141 [2024-11-26 20:32:07.691839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.399 [2024-11-26 20:32:07.711452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:14.399 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.399 20:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:14.399 [2024-11-26 20:32:07.720698] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.337 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.337 "name": "raid_bdev1", 00:19:15.337 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:15.337 "strip_size_kb": 64, 00:19:15.337 "state": "online", 00:19:15.337 
"raid_level": "raid5f", 00:19:15.337 "superblock": true, 00:19:15.337 "num_base_bdevs": 3, 00:19:15.337 "num_base_bdevs_discovered": 3, 00:19:15.337 "num_base_bdevs_operational": 3, 00:19:15.337 "process": { 00:19:15.337 "type": "rebuild", 00:19:15.337 "target": "spare", 00:19:15.338 "progress": { 00:19:15.338 "blocks": 18432, 00:19:15.338 "percent": 14 00:19:15.338 } 00:19:15.338 }, 00:19:15.338 "base_bdevs_list": [ 00:19:15.338 { 00:19:15.338 "name": "spare", 00:19:15.338 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:15.338 "is_configured": true, 00:19:15.338 "data_offset": 2048, 00:19:15.338 "data_size": 63488 00:19:15.338 }, 00:19:15.338 { 00:19:15.338 "name": "BaseBdev2", 00:19:15.338 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:15.338 "is_configured": true, 00:19:15.338 "data_offset": 2048, 00:19:15.338 "data_size": 63488 00:19:15.338 }, 00:19:15.338 { 00:19:15.338 "name": "BaseBdev3", 00:19:15.338 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:15.338 "is_configured": true, 00:19:15.338 "data_offset": 2048, 00:19:15.338 "data_size": 63488 00:19:15.338 } 00:19:15.338 ] 00:19:15.338 }' 00:19:15.338 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.338 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.338 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.338 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.338 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:15.338 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.338 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.338 [2024-11-26 20:32:08.864028] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.596 [2024-11-26 20:32:08.932102] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:15.596 [2024-11-26 20:32:08.932284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.596 [2024-11-26 20:32:08.932334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.596 [2024-11-26 20:32:08.932381] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.596 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.597 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.597 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.597 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.597 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.597 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.597 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.597 20:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.597 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.597 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.597 "name": "raid_bdev1", 00:19:15.597 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:15.597 "strip_size_kb": 64, 00:19:15.597 "state": "online", 00:19:15.597 "raid_level": "raid5f", 00:19:15.597 "superblock": true, 00:19:15.597 "num_base_bdevs": 3, 00:19:15.597 "num_base_bdevs_discovered": 2, 00:19:15.597 "num_base_bdevs_operational": 2, 00:19:15.597 "base_bdevs_list": [ 00:19:15.597 { 00:19:15.597 "name": null, 00:19:15.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.597 "is_configured": false, 00:19:15.597 "data_offset": 0, 00:19:15.597 "data_size": 63488 00:19:15.597 }, 00:19:15.597 { 00:19:15.597 "name": "BaseBdev2", 00:19:15.597 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:15.597 "is_configured": true, 00:19:15.597 "data_offset": 2048, 00:19:15.597 "data_size": 63488 00:19:15.597 }, 00:19:15.597 { 00:19:15.597 "name": "BaseBdev3", 00:19:15.597 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:15.597 "is_configured": true, 00:19:15.597 "data_offset": 2048, 00:19:15.597 "data_size": 63488 00:19:15.597 } 00:19:15.597 ] 00:19:15.597 }' 00:19:15.597 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.597 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.171 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:16.171 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.171 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.171 [2024-11-26 20:32:09.434701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:16.171 [2024-11-26 20:32:09.434849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.171 [2024-11-26 20:32:09.434899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:16.171 [2024-11-26 20:32:09.434947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.171 [2024-11-26 20:32:09.435596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.171 [2024-11-26 20:32:09.435628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:16.171 [2024-11-26 20:32:09.435743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:16.171 [2024-11-26 20:32:09.435765] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:16.171 [2024-11-26 20:32:09.435777] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:16.171 [2024-11-26 20:32:09.435804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.171 spare 00:19:16.171 [2024-11-26 20:32:09.454437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:19:16.171 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.171 20:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:16.171 [2024-11-26 20:32:09.463356] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.134 "name": "raid_bdev1", 00:19:17.134 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:17.134 "strip_size_kb": 64, 00:19:17.134 "state": 
"online", 00:19:17.134 "raid_level": "raid5f", 00:19:17.134 "superblock": true, 00:19:17.134 "num_base_bdevs": 3, 00:19:17.134 "num_base_bdevs_discovered": 3, 00:19:17.134 "num_base_bdevs_operational": 3, 00:19:17.134 "process": { 00:19:17.134 "type": "rebuild", 00:19:17.134 "target": "spare", 00:19:17.134 "progress": { 00:19:17.134 "blocks": 20480, 00:19:17.134 "percent": 16 00:19:17.134 } 00:19:17.134 }, 00:19:17.134 "base_bdevs_list": [ 00:19:17.134 { 00:19:17.134 "name": "spare", 00:19:17.134 "uuid": "7e03b303-deed-5758-9c49-202b021a8f19", 00:19:17.134 "is_configured": true, 00:19:17.134 "data_offset": 2048, 00:19:17.134 "data_size": 63488 00:19:17.134 }, 00:19:17.134 { 00:19:17.134 "name": "BaseBdev2", 00:19:17.134 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:17.134 "is_configured": true, 00:19:17.134 "data_offset": 2048, 00:19:17.134 "data_size": 63488 00:19:17.134 }, 00:19:17.134 { 00:19:17.134 "name": "BaseBdev3", 00:19:17.134 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:17.134 "is_configured": true, 00:19:17.134 "data_offset": 2048, 00:19:17.134 "data_size": 63488 00:19:17.134 } 00:19:17.134 ] 00:19:17.134 }' 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.134 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.134 [2024-11-26 20:32:10.618737] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.404 [2024-11-26 20:32:10.674853] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:17.404 [2024-11-26 20:32:10.674979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.404 [2024-11-26 20:32:10.675082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.404 [2024-11-26 20:32:10.675118] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.404 20:32:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.404 "name": "raid_bdev1", 00:19:17.404 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:17.404 "strip_size_kb": 64, 00:19:17.404 "state": "online", 00:19:17.404 "raid_level": "raid5f", 00:19:17.404 "superblock": true, 00:19:17.404 "num_base_bdevs": 3, 00:19:17.404 "num_base_bdevs_discovered": 2, 00:19:17.404 "num_base_bdevs_operational": 2, 00:19:17.404 "base_bdevs_list": [ 00:19:17.404 { 00:19:17.404 "name": null, 00:19:17.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.404 "is_configured": false, 00:19:17.404 "data_offset": 0, 00:19:17.404 "data_size": 63488 00:19:17.404 }, 00:19:17.404 { 00:19:17.404 "name": "BaseBdev2", 00:19:17.404 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:17.404 "is_configured": true, 00:19:17.404 "data_offset": 2048, 00:19:17.404 "data_size": 63488 00:19:17.404 }, 00:19:17.404 { 00:19:17.404 "name": "BaseBdev3", 00:19:17.404 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:17.404 "is_configured": true, 00:19:17.404 "data_offset": 2048, 00:19:17.404 "data_size": 63488 00:19:17.404 } 00:19:17.404 ] 00:19:17.404 }' 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.404 20:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.663 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.922 "name": "raid_bdev1", 00:19:17.922 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:17.922 "strip_size_kb": 64, 00:19:17.922 "state": "online", 00:19:17.922 "raid_level": "raid5f", 00:19:17.922 "superblock": true, 00:19:17.922 "num_base_bdevs": 3, 00:19:17.922 "num_base_bdevs_discovered": 2, 00:19:17.922 "num_base_bdevs_operational": 2, 00:19:17.922 "base_bdevs_list": [ 00:19:17.922 { 00:19:17.922 "name": null, 00:19:17.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.922 "is_configured": false, 00:19:17.922 "data_offset": 0, 00:19:17.922 "data_size": 63488 00:19:17.922 }, 00:19:17.922 { 00:19:17.922 "name": "BaseBdev2", 00:19:17.922 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:17.922 "is_configured": true, 00:19:17.922 "data_offset": 2048, 00:19:17.922 "data_size": 63488 00:19:17.922 }, 00:19:17.922 { 00:19:17.922 "name": "BaseBdev3", 00:19:17.922 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:17.922 
"is_configured": true, 00:19:17.922 "data_offset": 2048, 00:19:17.922 "data_size": 63488 00:19:17.922 } 00:19:17.922 ] 00:19:17.922 }' 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.922 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:17.923 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.923 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.923 [2024-11-26 20:32:11.346694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:17.923 [2024-11-26 20:32:11.346809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.923 [2024-11-26 20:32:11.346857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:17.923 [2024-11-26 20:32:11.346869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.923 [2024-11-26 20:32:11.347443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.923 
[2024-11-26 20:32:11.347465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:17.923 [2024-11-26 20:32:11.347560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:17.923 [2024-11-26 20:32:11.347581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.923 [2024-11-26 20:32:11.347605] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:17.923 [2024-11-26 20:32:11.347617] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:17.923 BaseBdev1 00:19:17.923 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.923 20:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.860 20:32:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.860 "name": "raid_bdev1", 00:19:18.860 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:18.860 "strip_size_kb": 64, 00:19:18.860 "state": "online", 00:19:18.860 "raid_level": "raid5f", 00:19:18.860 "superblock": true, 00:19:18.860 "num_base_bdevs": 3, 00:19:18.860 "num_base_bdevs_discovered": 2, 00:19:18.860 "num_base_bdevs_operational": 2, 00:19:18.860 "base_bdevs_list": [ 00:19:18.860 { 00:19:18.860 "name": null, 00:19:18.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.860 "is_configured": false, 00:19:18.860 "data_offset": 0, 00:19:18.860 "data_size": 63488 00:19:18.860 }, 00:19:18.860 { 00:19:18.860 "name": "BaseBdev2", 00:19:18.860 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:18.860 "is_configured": true, 00:19:18.860 "data_offset": 2048, 00:19:18.860 "data_size": 63488 00:19:18.860 }, 00:19:18.860 { 00:19:18.860 "name": "BaseBdev3", 00:19:18.860 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:18.860 "is_configured": true, 00:19:18.860 "data_offset": 2048, 00:19:18.860 "data_size": 63488 00:19:18.860 } 00:19:18.860 ] 00:19:18.860 }' 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.860 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.428 "name": "raid_bdev1", 00:19:19.428 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:19.428 "strip_size_kb": 64, 00:19:19.428 "state": "online", 00:19:19.428 "raid_level": "raid5f", 00:19:19.428 "superblock": true, 00:19:19.428 "num_base_bdevs": 3, 00:19:19.428 "num_base_bdevs_discovered": 2, 00:19:19.428 "num_base_bdevs_operational": 2, 00:19:19.428 "base_bdevs_list": [ 00:19:19.428 { 00:19:19.428 "name": null, 00:19:19.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.428 "is_configured": false, 00:19:19.428 "data_offset": 0, 00:19:19.428 "data_size": 63488 00:19:19.428 }, 00:19:19.428 { 00:19:19.428 "name": "BaseBdev2", 00:19:19.428 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 
00:19:19.428 "is_configured": true, 00:19:19.428 "data_offset": 2048, 00:19:19.428 "data_size": 63488 00:19:19.428 }, 00:19:19.428 { 00:19:19.428 "name": "BaseBdev3", 00:19:19.428 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:19.428 "is_configured": true, 00:19:19.428 "data_offset": 2048, 00:19:19.428 "data_size": 63488 00:19:19.428 } 00:19:19.428 ] 00:19:19.428 }' 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.428 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:19.687 20:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.687 20:32:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.687 [2024-11-26 20:32:13.000208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.687 [2024-11-26 20:32:13.000487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.687 [2024-11-26 20:32:13.000514] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:19.687 request: 00:19:19.687 { 00:19:19.687 "base_bdev": "BaseBdev1", 00:19:19.687 "raid_bdev": "raid_bdev1", 00:19:19.687 "method": "bdev_raid_add_base_bdev", 00:19:19.687 "req_id": 1 00:19:19.687 } 00:19:19.687 Got JSON-RPC error response 00:19:19.687 response: 00:19:19.687 { 00:19:19.687 "code": -22, 00:19:19.688 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:19.688 } 00:19:19.688 20:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:19.688 20:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:19.688 20:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.688 20:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.688 20:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.688 20:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.625 "name": "raid_bdev1", 00:19:20.625 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:20.625 "strip_size_kb": 64, 00:19:20.625 "state": "online", 00:19:20.625 "raid_level": "raid5f", 00:19:20.625 "superblock": true, 00:19:20.625 "num_base_bdevs": 3, 00:19:20.625 "num_base_bdevs_discovered": 2, 00:19:20.625 "num_base_bdevs_operational": 2, 00:19:20.625 "base_bdevs_list": [ 00:19:20.625 { 00:19:20.625 "name": null, 00:19:20.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.625 "is_configured": false, 00:19:20.625 "data_offset": 0, 00:19:20.625 "data_size": 63488 00:19:20.625 }, 00:19:20.625 { 00:19:20.625 
"name": "BaseBdev2", 00:19:20.625 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:20.625 "is_configured": true, 00:19:20.625 "data_offset": 2048, 00:19:20.625 "data_size": 63488 00:19:20.625 }, 00:19:20.625 { 00:19:20.625 "name": "BaseBdev3", 00:19:20.625 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:20.625 "is_configured": true, 00:19:20.625 "data_offset": 2048, 00:19:20.625 "data_size": 63488 00:19:20.625 } 00:19:20.625 ] 00:19:20.625 }' 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.625 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.194 "name": "raid_bdev1", 00:19:21.194 "uuid": "932e8ef6-b77e-48e6-b2bd-fcaf3da50a13", 00:19:21.194 
"strip_size_kb": 64, 00:19:21.194 "state": "online", 00:19:21.194 "raid_level": "raid5f", 00:19:21.194 "superblock": true, 00:19:21.194 "num_base_bdevs": 3, 00:19:21.194 "num_base_bdevs_discovered": 2, 00:19:21.194 "num_base_bdevs_operational": 2, 00:19:21.194 "base_bdevs_list": [ 00:19:21.194 { 00:19:21.194 "name": null, 00:19:21.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.194 "is_configured": false, 00:19:21.194 "data_offset": 0, 00:19:21.194 "data_size": 63488 00:19:21.194 }, 00:19:21.194 { 00:19:21.194 "name": "BaseBdev2", 00:19:21.194 "uuid": "9d7f357d-9b1a-5ec1-b670-8a3e6dd50065", 00:19:21.194 "is_configured": true, 00:19:21.194 "data_offset": 2048, 00:19:21.194 "data_size": 63488 00:19:21.194 }, 00:19:21.194 { 00:19:21.194 "name": "BaseBdev3", 00:19:21.194 "uuid": "cde954a1-a1fd-5616-b2e0-41443d1c1b6e", 00:19:21.194 "is_configured": true, 00:19:21.194 "data_offset": 2048, 00:19:21.194 "data_size": 63488 00:19:21.194 } 00:19:21.194 ] 00:19:21.194 }' 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82472 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82472 ']' 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82472 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.194 20:32:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82472 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82472' 00:19:21.194 killing process with pid 82472 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82472 00:19:21.194 Received shutdown signal, test time was about 60.000000 seconds 00:19:21.194 00:19:21.194 Latency(us) 00:19:21.194 [2024-11-26T20:32:14.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.194 [2024-11-26T20:32:14.749Z] =================================================================================================================== 00:19:21.194 [2024-11-26T20:32:14.749Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.194 [2024-11-26 20:32:14.644855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.194 [2024-11-26 20:32:14.645009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.194 20:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82472 00:19:21.194 [2024-11-26 20:32:14.645086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.194 [2024-11-26 20:32:14.645101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:21.762 [2024-11-26 20:32:15.073273] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.142 20:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:23.142 00:19:23.142 real 0m24.091s 00:19:23.142 user 0m31.013s 
00:19:23.142 sys 0m2.875s 00:19:23.142 20:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.142 20:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.142 ************************************ 00:19:23.142 END TEST raid5f_rebuild_test_sb 00:19:23.142 ************************************ 00:19:23.142 20:32:16 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:23.142 20:32:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:19:23.142 20:32:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:23.142 20:32:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.142 20:32:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.142 ************************************ 00:19:23.142 START TEST raid5f_state_function_test 00:19:23.142 ************************************ 00:19:23.142 20:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:19:23.142 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:23.142 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:23.142 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:23.142 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:23.142 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:23.142 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:23.143 Process raid pid: 83233 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83233 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83233' 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83233 00:19:23.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83233 ']' 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.143 20:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.143 [2024-11-26 20:32:16.448201] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:19:23.143 [2024-11-26 20:32:16.448363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.143 [2024-11-26 20:32:16.607969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.403 [2024-11-26 20:32:16.729614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.403 [2024-11-26 20:32:16.949867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.403 [2024-11-26 20:32:16.950006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.972 [2024-11-26 20:32:17.322324] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:23.972 [2024-11-26 20:32:17.322428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:23.972 [2024-11-26 20:32:17.322463] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.972 [2024-11-26 20:32:17.322488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.972 [2024-11-26 20:32:17.322506] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:19:23.972 [2024-11-26 20:32:17.322539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:23.972 [2024-11-26 20:32:17.322566] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:23.972 [2024-11-26 20:32:17.322588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.972 20:32:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.972 "name": "Existed_Raid", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.972 "strip_size_kb": 64, 00:19:23.972 "state": "configuring", 00:19:23.972 "raid_level": "raid5f", 00:19:23.972 "superblock": false, 00:19:23.972 "num_base_bdevs": 4, 00:19:23.972 "num_base_bdevs_discovered": 0, 00:19:23.972 "num_base_bdevs_operational": 4, 00:19:23.972 "base_bdevs_list": [ 00:19:23.972 { 00:19:23.972 "name": "BaseBdev1", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.972 "is_configured": false, 00:19:23.972 "data_offset": 0, 00:19:23.972 "data_size": 0 00:19:23.972 }, 00:19:23.972 { 00:19:23.972 "name": "BaseBdev2", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.972 "is_configured": false, 00:19:23.972 "data_offset": 0, 00:19:23.972 "data_size": 0 00:19:23.972 }, 00:19:23.972 { 00:19:23.972 "name": "BaseBdev3", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.972 "is_configured": false, 00:19:23.972 "data_offset": 0, 00:19:23.972 "data_size": 0 00:19:23.972 }, 00:19:23.972 { 00:19:23.972 "name": "BaseBdev4", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.972 "is_configured": false, 00:19:23.972 "data_offset": 0, 00:19:23.972 "data_size": 0 00:19:23.972 } 00:19:23.972 ] 00:19:23.972 }' 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.972 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.238 20:32:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:24.238 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.238 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.238 [2024-11-26 20:32:17.777504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.238 [2024-11-26 20:32:17.777600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:24.238 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.238 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:24.238 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.238 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 [2024-11-26 20:32:17.789499] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.509 [2024-11-26 20:32:17.789598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.509 [2024-11-26 20:32:17.789638] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.509 [2024-11-26 20:32:17.789667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.509 [2024-11-26 20:32:17.789696] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:24.509 [2024-11-26 20:32:17.789723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:24.509 [2024-11-26 20:32:17.789767] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:19:24.509 [2024-11-26 20:32:17.789824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 [2024-11-26 20:32:17.840886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.509 BaseBdev1 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.509 
20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.509 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 [ 00:19:24.509 { 00:19:24.509 "name": "BaseBdev1", 00:19:24.510 "aliases": [ 00:19:24.510 "17d24515-2826-4cc5-bb96-e5f2a999c927" 00:19:24.510 ], 00:19:24.510 "product_name": "Malloc disk", 00:19:24.510 "block_size": 512, 00:19:24.510 "num_blocks": 65536, 00:19:24.510 "uuid": "17d24515-2826-4cc5-bb96-e5f2a999c927", 00:19:24.510 "assigned_rate_limits": { 00:19:24.510 "rw_ios_per_sec": 0, 00:19:24.510 "rw_mbytes_per_sec": 0, 00:19:24.510 "r_mbytes_per_sec": 0, 00:19:24.510 "w_mbytes_per_sec": 0 00:19:24.510 }, 00:19:24.510 "claimed": true, 00:19:24.510 "claim_type": "exclusive_write", 00:19:24.510 "zoned": false, 00:19:24.510 "supported_io_types": { 00:19:24.510 "read": true, 00:19:24.510 "write": true, 00:19:24.510 "unmap": true, 00:19:24.510 "flush": true, 00:19:24.510 "reset": true, 00:19:24.510 "nvme_admin": false, 00:19:24.510 "nvme_io": false, 00:19:24.510 "nvme_io_md": false, 00:19:24.510 "write_zeroes": true, 00:19:24.510 "zcopy": true, 00:19:24.510 "get_zone_info": false, 00:19:24.510 "zone_management": false, 00:19:24.510 "zone_append": false, 00:19:24.510 "compare": false, 00:19:24.510 "compare_and_write": false, 00:19:24.510 "abort": true, 00:19:24.510 "seek_hole": false, 00:19:24.510 "seek_data": false, 00:19:24.510 "copy": true, 00:19:24.510 "nvme_iov_md": false 00:19:24.510 }, 00:19:24.510 "memory_domains": [ 00:19:24.510 { 00:19:24.510 "dma_device_id": "system", 00:19:24.510 "dma_device_type": 1 00:19:24.510 }, 00:19:24.510 { 00:19:24.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.510 "dma_device_type": 2 00:19:24.510 } 00:19:24.510 ], 00:19:24.510 "driver_specific": {} 00:19:24.510 } 
00:19:24.510 ] 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.510 "name": "Existed_Raid", 00:19:24.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.510 "strip_size_kb": 64, 00:19:24.510 "state": "configuring", 00:19:24.510 "raid_level": "raid5f", 00:19:24.510 "superblock": false, 00:19:24.510 "num_base_bdevs": 4, 00:19:24.510 "num_base_bdevs_discovered": 1, 00:19:24.510 "num_base_bdevs_operational": 4, 00:19:24.510 "base_bdevs_list": [ 00:19:24.510 { 00:19:24.510 "name": "BaseBdev1", 00:19:24.510 "uuid": "17d24515-2826-4cc5-bb96-e5f2a999c927", 00:19:24.510 "is_configured": true, 00:19:24.510 "data_offset": 0, 00:19:24.510 "data_size": 65536 00:19:24.510 }, 00:19:24.510 { 00:19:24.510 "name": "BaseBdev2", 00:19:24.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.510 "is_configured": false, 00:19:24.510 "data_offset": 0, 00:19:24.510 "data_size": 0 00:19:24.510 }, 00:19:24.510 { 00:19:24.510 "name": "BaseBdev3", 00:19:24.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.510 "is_configured": false, 00:19:24.510 "data_offset": 0, 00:19:24.510 "data_size": 0 00:19:24.510 }, 00:19:24.510 { 00:19:24.510 "name": "BaseBdev4", 00:19:24.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.510 "is_configured": false, 00:19:24.510 "data_offset": 0, 00:19:24.510 "data_size": 0 00:19:24.510 } 00:19:24.510 ] 00:19:24.510 }' 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.510 20:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 
[2024-11-26 20:32:18.336102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.079 [2024-11-26 20:32:18.336215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 [2024-11-26 20:32:18.348126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.079 [2024-11-26 20:32:18.350300] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.079 [2024-11-26 20:32:18.350392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.079 [2024-11-26 20:32:18.350424] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:25.079 [2024-11-26 20:32:18.350452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:25.079 [2024-11-26 20:32:18.350474] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:25.079 [2024-11-26 20:32:18.350509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.079 "name": "Existed_Raid", 00:19:25.079 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:25.079 "strip_size_kb": 64, 00:19:25.079 "state": "configuring", 00:19:25.079 "raid_level": "raid5f", 00:19:25.079 "superblock": false, 00:19:25.079 "num_base_bdevs": 4, 00:19:25.079 "num_base_bdevs_discovered": 1, 00:19:25.079 "num_base_bdevs_operational": 4, 00:19:25.079 "base_bdevs_list": [ 00:19:25.079 { 00:19:25.079 "name": "BaseBdev1", 00:19:25.079 "uuid": "17d24515-2826-4cc5-bb96-e5f2a999c927", 00:19:25.079 "is_configured": true, 00:19:25.079 "data_offset": 0, 00:19:25.079 "data_size": 65536 00:19:25.079 }, 00:19:25.079 { 00:19:25.079 "name": "BaseBdev2", 00:19:25.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.079 "is_configured": false, 00:19:25.079 "data_offset": 0, 00:19:25.079 "data_size": 0 00:19:25.079 }, 00:19:25.079 { 00:19:25.079 "name": "BaseBdev3", 00:19:25.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.079 "is_configured": false, 00:19:25.079 "data_offset": 0, 00:19:25.079 "data_size": 0 00:19:25.079 }, 00:19:25.079 { 00:19:25.079 "name": "BaseBdev4", 00:19:25.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.079 "is_configured": false, 00:19:25.079 "data_offset": 0, 00:19:25.079 "data_size": 0 00:19:25.079 } 00:19:25.079 ] 00:19:25.079 }' 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.079 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.337 [2024-11-26 20:32:18.857925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.337 BaseBdev2 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:25.337 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:25.338 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.338 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.338 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.338 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:25.338 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.338 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.338 [ 00:19:25.338 { 00:19:25.338 "name": "BaseBdev2", 00:19:25.338 "aliases": [ 00:19:25.338 "590b3c59-a138-4681-ab45-af9c068b1eea" 00:19:25.338 ], 00:19:25.338 "product_name": "Malloc disk", 00:19:25.338 "block_size": 512, 00:19:25.338 "num_blocks": 65536, 00:19:25.338 "uuid": "590b3c59-a138-4681-ab45-af9c068b1eea", 00:19:25.338 "assigned_rate_limits": { 00:19:25.338 "rw_ios_per_sec": 0, 00:19:25.338 "rw_mbytes_per_sec": 0, 00:19:25.338 
"r_mbytes_per_sec": 0, 00:19:25.338 "w_mbytes_per_sec": 0 00:19:25.338 }, 00:19:25.338 "claimed": true, 00:19:25.338 "claim_type": "exclusive_write", 00:19:25.338 "zoned": false, 00:19:25.338 "supported_io_types": { 00:19:25.338 "read": true, 00:19:25.338 "write": true, 00:19:25.338 "unmap": true, 00:19:25.338 "flush": true, 00:19:25.596 "reset": true, 00:19:25.596 "nvme_admin": false, 00:19:25.596 "nvme_io": false, 00:19:25.596 "nvme_io_md": false, 00:19:25.596 "write_zeroes": true, 00:19:25.596 "zcopy": true, 00:19:25.596 "get_zone_info": false, 00:19:25.596 "zone_management": false, 00:19:25.596 "zone_append": false, 00:19:25.596 "compare": false, 00:19:25.596 "compare_and_write": false, 00:19:25.596 "abort": true, 00:19:25.597 "seek_hole": false, 00:19:25.597 "seek_data": false, 00:19:25.597 "copy": true, 00:19:25.597 "nvme_iov_md": false 00:19:25.597 }, 00:19:25.597 "memory_domains": [ 00:19:25.597 { 00:19:25.597 "dma_device_id": "system", 00:19:25.597 "dma_device_type": 1 00:19:25.597 }, 00:19:25.597 { 00:19:25.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.597 "dma_device_type": 2 00:19:25.597 } 00:19:25.597 ], 00:19:25.597 "driver_specific": {} 00:19:25.597 } 00:19:25.597 ] 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.597 "name": "Existed_Raid", 00:19:25.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.597 "strip_size_kb": 64, 00:19:25.597 "state": "configuring", 00:19:25.597 "raid_level": "raid5f", 00:19:25.597 "superblock": false, 00:19:25.597 "num_base_bdevs": 4, 00:19:25.597 "num_base_bdevs_discovered": 2, 00:19:25.597 "num_base_bdevs_operational": 4, 00:19:25.597 "base_bdevs_list": [ 00:19:25.597 { 00:19:25.597 "name": "BaseBdev1", 00:19:25.597 "uuid": 
"17d24515-2826-4cc5-bb96-e5f2a999c927", 00:19:25.597 "is_configured": true, 00:19:25.597 "data_offset": 0, 00:19:25.597 "data_size": 65536 00:19:25.597 }, 00:19:25.597 { 00:19:25.597 "name": "BaseBdev2", 00:19:25.597 "uuid": "590b3c59-a138-4681-ab45-af9c068b1eea", 00:19:25.597 "is_configured": true, 00:19:25.597 "data_offset": 0, 00:19:25.597 "data_size": 65536 00:19:25.597 }, 00:19:25.597 { 00:19:25.597 "name": "BaseBdev3", 00:19:25.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.597 "is_configured": false, 00:19:25.597 "data_offset": 0, 00:19:25.597 "data_size": 0 00:19:25.597 }, 00:19:25.597 { 00:19:25.597 "name": "BaseBdev4", 00:19:25.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.597 "is_configured": false, 00:19:25.597 "data_offset": 0, 00:19:25.597 "data_size": 0 00:19:25.597 } 00:19:25.597 ] 00:19:25.597 }' 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.597 20:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.854 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:25.854 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.854 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.111 [2024-11-26 20:32:19.417434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:26.111 BaseBdev3 00:19:26.111 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.111 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:26.111 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:26.111 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.112 [ 00:19:26.112 { 00:19:26.112 "name": "BaseBdev3", 00:19:26.112 "aliases": [ 00:19:26.112 "ccdf3169-aca4-4569-ad72-24bc04f56e31" 00:19:26.112 ], 00:19:26.112 "product_name": "Malloc disk", 00:19:26.112 "block_size": 512, 00:19:26.112 "num_blocks": 65536, 00:19:26.112 "uuid": "ccdf3169-aca4-4569-ad72-24bc04f56e31", 00:19:26.112 "assigned_rate_limits": { 00:19:26.112 "rw_ios_per_sec": 0, 00:19:26.112 "rw_mbytes_per_sec": 0, 00:19:26.112 "r_mbytes_per_sec": 0, 00:19:26.112 "w_mbytes_per_sec": 0 00:19:26.112 }, 00:19:26.112 "claimed": true, 00:19:26.112 "claim_type": "exclusive_write", 00:19:26.112 "zoned": false, 00:19:26.112 "supported_io_types": { 00:19:26.112 "read": true, 00:19:26.112 "write": true, 00:19:26.112 "unmap": true, 00:19:26.112 "flush": true, 00:19:26.112 "reset": true, 00:19:26.112 "nvme_admin": false, 
00:19:26.112 "nvme_io": false, 00:19:26.112 "nvme_io_md": false, 00:19:26.112 "write_zeroes": true, 00:19:26.112 "zcopy": true, 00:19:26.112 "get_zone_info": false, 00:19:26.112 "zone_management": false, 00:19:26.112 "zone_append": false, 00:19:26.112 "compare": false, 00:19:26.112 "compare_and_write": false, 00:19:26.112 "abort": true, 00:19:26.112 "seek_hole": false, 00:19:26.112 "seek_data": false, 00:19:26.112 "copy": true, 00:19:26.112 "nvme_iov_md": false 00:19:26.112 }, 00:19:26.112 "memory_domains": [ 00:19:26.112 { 00:19:26.112 "dma_device_id": "system", 00:19:26.112 "dma_device_type": 1 00:19:26.112 }, 00:19:26.112 { 00:19:26.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.112 "dma_device_type": 2 00:19:26.112 } 00:19:26.112 ], 00:19:26.112 "driver_specific": {} 00:19:26.112 } 00:19:26.112 ] 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.112 "name": "Existed_Raid", 00:19:26.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.112 "strip_size_kb": 64, 00:19:26.112 "state": "configuring", 00:19:26.112 "raid_level": "raid5f", 00:19:26.112 "superblock": false, 00:19:26.112 "num_base_bdevs": 4, 00:19:26.112 "num_base_bdevs_discovered": 3, 00:19:26.112 "num_base_bdevs_operational": 4, 00:19:26.112 "base_bdevs_list": [ 00:19:26.112 { 00:19:26.112 "name": "BaseBdev1", 00:19:26.112 "uuid": "17d24515-2826-4cc5-bb96-e5f2a999c927", 00:19:26.112 "is_configured": true, 00:19:26.112 "data_offset": 0, 00:19:26.112 "data_size": 65536 00:19:26.112 }, 00:19:26.112 { 00:19:26.112 "name": "BaseBdev2", 00:19:26.112 "uuid": "590b3c59-a138-4681-ab45-af9c068b1eea", 00:19:26.112 "is_configured": true, 00:19:26.112 "data_offset": 0, 00:19:26.112 "data_size": 65536 00:19:26.112 }, 00:19:26.112 { 
00:19:26.112 "name": "BaseBdev3", 00:19:26.112 "uuid": "ccdf3169-aca4-4569-ad72-24bc04f56e31", 00:19:26.112 "is_configured": true, 00:19:26.112 "data_offset": 0, 00:19:26.112 "data_size": 65536 00:19:26.112 }, 00:19:26.112 { 00:19:26.112 "name": "BaseBdev4", 00:19:26.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.112 "is_configured": false, 00:19:26.112 "data_offset": 0, 00:19:26.112 "data_size": 0 00:19:26.112 } 00:19:26.112 ] 00:19:26.112 }' 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.112 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.678 [2024-11-26 20:32:19.982621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:26.678 [2024-11-26 20:32:19.982800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:26.678 [2024-11-26 20:32:19.982834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:26.678 [2024-11-26 20:32:19.983153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:26.678 [2024-11-26 20:32:19.991627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:26.678 [2024-11-26 20:32:19.991697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:26.678 [2024-11-26 20:32:19.992098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.678 BaseBdev4 00:19:26.678 20:32:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.678 20:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.678 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.678 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:26.678 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.678 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.678 [ 00:19:26.678 { 00:19:26.678 "name": "BaseBdev4", 00:19:26.678 "aliases": [ 00:19:26.678 "eda9cfca-23f8-486d-a85a-02926c8f134b" 00:19:26.678 ], 00:19:26.678 "product_name": "Malloc disk", 00:19:26.678 "block_size": 512, 00:19:26.678 "num_blocks": 65536, 00:19:26.678 "uuid": "eda9cfca-23f8-486d-a85a-02926c8f134b", 00:19:26.678 "assigned_rate_limits": { 00:19:26.678 "rw_ios_per_sec": 0, 00:19:26.678 
"rw_mbytes_per_sec": 0, 00:19:26.678 "r_mbytes_per_sec": 0, 00:19:26.678 "w_mbytes_per_sec": 0 00:19:26.678 }, 00:19:26.678 "claimed": true, 00:19:26.678 "claim_type": "exclusive_write", 00:19:26.678 "zoned": false, 00:19:26.678 "supported_io_types": { 00:19:26.678 "read": true, 00:19:26.678 "write": true, 00:19:26.678 "unmap": true, 00:19:26.678 "flush": true, 00:19:26.678 "reset": true, 00:19:26.678 "nvme_admin": false, 00:19:26.678 "nvme_io": false, 00:19:26.678 "nvme_io_md": false, 00:19:26.678 "write_zeroes": true, 00:19:26.678 "zcopy": true, 00:19:26.678 "get_zone_info": false, 00:19:26.678 "zone_management": false, 00:19:26.678 "zone_append": false, 00:19:26.678 "compare": false, 00:19:26.678 "compare_and_write": false, 00:19:26.678 "abort": true, 00:19:26.678 "seek_hole": false, 00:19:26.678 "seek_data": false, 00:19:26.679 "copy": true, 00:19:26.679 "nvme_iov_md": false 00:19:26.679 }, 00:19:26.679 "memory_domains": [ 00:19:26.679 { 00:19:26.679 "dma_device_id": "system", 00:19:26.679 "dma_device_type": 1 00:19:26.679 }, 00:19:26.679 { 00:19:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.679 "dma_device_type": 2 00:19:26.679 } 00:19:26.679 ], 00:19:26.679 "driver_specific": {} 00:19:26.679 } 00:19:26.679 ] 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.679 20:32:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.679 "name": "Existed_Raid", 00:19:26.679 "uuid": "6e151c22-dce3-412c-8161-5161d334755f", 00:19:26.679 "strip_size_kb": 64, 00:19:26.679 "state": "online", 00:19:26.679 "raid_level": "raid5f", 00:19:26.679 "superblock": false, 00:19:26.679 "num_base_bdevs": 4, 00:19:26.679 "num_base_bdevs_discovered": 4, 00:19:26.679 "num_base_bdevs_operational": 4, 00:19:26.679 "base_bdevs_list": [ 00:19:26.679 { 00:19:26.679 "name": 
"BaseBdev1", 00:19:26.679 "uuid": "17d24515-2826-4cc5-bb96-e5f2a999c927", 00:19:26.679 "is_configured": true, 00:19:26.679 "data_offset": 0, 00:19:26.679 "data_size": 65536 00:19:26.679 }, 00:19:26.679 { 00:19:26.679 "name": "BaseBdev2", 00:19:26.679 "uuid": "590b3c59-a138-4681-ab45-af9c068b1eea", 00:19:26.679 "is_configured": true, 00:19:26.679 "data_offset": 0, 00:19:26.679 "data_size": 65536 00:19:26.679 }, 00:19:26.679 { 00:19:26.679 "name": "BaseBdev3", 00:19:26.679 "uuid": "ccdf3169-aca4-4569-ad72-24bc04f56e31", 00:19:26.679 "is_configured": true, 00:19:26.679 "data_offset": 0, 00:19:26.679 "data_size": 65536 00:19:26.679 }, 00:19:26.679 { 00:19:26.679 "name": "BaseBdev4", 00:19:26.679 "uuid": "eda9cfca-23f8-486d-a85a-02926c8f134b", 00:19:26.679 "is_configured": true, 00:19:26.679 "data_offset": 0, 00:19:26.679 "data_size": 65536 00:19:26.679 } 00:19:26.679 ] 00:19:26.679 }' 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.679 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.246 [2024-11-26 20:32:20.533498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.246 "name": "Existed_Raid", 00:19:27.246 "aliases": [ 00:19:27.246 "6e151c22-dce3-412c-8161-5161d334755f" 00:19:27.246 ], 00:19:27.246 "product_name": "Raid Volume", 00:19:27.246 "block_size": 512, 00:19:27.246 "num_blocks": 196608, 00:19:27.246 "uuid": "6e151c22-dce3-412c-8161-5161d334755f", 00:19:27.246 "assigned_rate_limits": { 00:19:27.246 "rw_ios_per_sec": 0, 00:19:27.246 "rw_mbytes_per_sec": 0, 00:19:27.246 "r_mbytes_per_sec": 0, 00:19:27.246 "w_mbytes_per_sec": 0 00:19:27.246 }, 00:19:27.246 "claimed": false, 00:19:27.246 "zoned": false, 00:19:27.246 "supported_io_types": { 00:19:27.246 "read": true, 00:19:27.246 "write": true, 00:19:27.246 "unmap": false, 00:19:27.246 "flush": false, 00:19:27.246 "reset": true, 00:19:27.246 "nvme_admin": false, 00:19:27.246 "nvme_io": false, 00:19:27.246 "nvme_io_md": false, 00:19:27.246 "write_zeroes": true, 00:19:27.246 "zcopy": false, 00:19:27.246 "get_zone_info": false, 00:19:27.246 "zone_management": false, 00:19:27.246 "zone_append": false, 00:19:27.246 "compare": false, 00:19:27.246 "compare_and_write": false, 00:19:27.246 "abort": false, 00:19:27.246 "seek_hole": false, 00:19:27.246 "seek_data": false, 00:19:27.246 "copy": false, 00:19:27.246 "nvme_iov_md": false 00:19:27.246 }, 00:19:27.246 "driver_specific": { 00:19:27.246 "raid": { 00:19:27.246 "uuid": "6e151c22-dce3-412c-8161-5161d334755f", 00:19:27.246 "strip_size_kb": 64, 
00:19:27.246 "state": "online", 00:19:27.246 "raid_level": "raid5f", 00:19:27.246 "superblock": false, 00:19:27.246 "num_base_bdevs": 4, 00:19:27.246 "num_base_bdevs_discovered": 4, 00:19:27.246 "num_base_bdevs_operational": 4, 00:19:27.246 "base_bdevs_list": [ 00:19:27.246 { 00:19:27.246 "name": "BaseBdev1", 00:19:27.246 "uuid": "17d24515-2826-4cc5-bb96-e5f2a999c927", 00:19:27.246 "is_configured": true, 00:19:27.246 "data_offset": 0, 00:19:27.246 "data_size": 65536 00:19:27.246 }, 00:19:27.246 { 00:19:27.246 "name": "BaseBdev2", 00:19:27.246 "uuid": "590b3c59-a138-4681-ab45-af9c068b1eea", 00:19:27.246 "is_configured": true, 00:19:27.246 "data_offset": 0, 00:19:27.246 "data_size": 65536 00:19:27.246 }, 00:19:27.246 { 00:19:27.246 "name": "BaseBdev3", 00:19:27.246 "uuid": "ccdf3169-aca4-4569-ad72-24bc04f56e31", 00:19:27.246 "is_configured": true, 00:19:27.246 "data_offset": 0, 00:19:27.246 "data_size": 65536 00:19:27.246 }, 00:19:27.246 { 00:19:27.246 "name": "BaseBdev4", 00:19:27.246 "uuid": "eda9cfca-23f8-486d-a85a-02926c8f134b", 00:19:27.246 "is_configured": true, 00:19:27.246 "data_offset": 0, 00:19:27.246 "data_size": 65536 00:19:27.246 } 00:19:27.246 ] 00:19:27.246 } 00:19:27.246 } 00:19:27.246 }' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:27.246 BaseBdev2 00:19:27.246 BaseBdev3 00:19:27.246 BaseBdev4' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.246 20:32:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.246 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:27.247 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.247 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.247 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.505 [2024-11-26 20:32:20.872757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.505 20:32:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.505 20:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.505 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.505 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.506 "name": "Existed_Raid", 00:19:27.506 "uuid": "6e151c22-dce3-412c-8161-5161d334755f", 00:19:27.506 "strip_size_kb": 64, 00:19:27.506 "state": "online", 00:19:27.506 "raid_level": "raid5f", 00:19:27.506 "superblock": false, 00:19:27.506 "num_base_bdevs": 4, 00:19:27.506 "num_base_bdevs_discovered": 3, 00:19:27.506 "num_base_bdevs_operational": 3, 00:19:27.506 "base_bdevs_list": [ 00:19:27.506 { 00:19:27.506 "name": null, 00:19:27.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.506 "is_configured": false, 00:19:27.506 "data_offset": 0, 00:19:27.506 "data_size": 65536 00:19:27.506 }, 00:19:27.506 { 00:19:27.506 "name": "BaseBdev2", 00:19:27.506 "uuid": "590b3c59-a138-4681-ab45-af9c068b1eea", 00:19:27.506 "is_configured": true, 00:19:27.506 "data_offset": 0, 00:19:27.506 "data_size": 65536 00:19:27.506 }, 00:19:27.506 { 00:19:27.506 "name": "BaseBdev3", 00:19:27.506 "uuid": "ccdf3169-aca4-4569-ad72-24bc04f56e31", 00:19:27.506 "is_configured": true, 00:19:27.506 "data_offset": 0, 00:19:27.506 "data_size": 65536 00:19:27.506 }, 00:19:27.506 { 00:19:27.506 "name": "BaseBdev4", 00:19:27.506 "uuid": "eda9cfca-23f8-486d-a85a-02926c8f134b", 00:19:27.506 "is_configured": true, 00:19:27.506 "data_offset": 0, 00:19:27.506 "data_size": 65536 00:19:27.506 } 00:19:27.506 ] 00:19:27.506 }' 00:19:27.506 
20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.506 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.073 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.073 [2024-11-26 20:32:21.528441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:28.073 [2024-11-26 20:32:21.528595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.332 [2024-11-26 20:32:21.633567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.332 [2024-11-26 20:32:21.689492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.332 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.332 [2024-11-26 20:32:21.849781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:28.332 [2024-11-26 20:32:21.849896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:28.591 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.591 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:28.591 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.591 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.591 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.591 20:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.591 20:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:28.592 20:32:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.592 BaseBdev2 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.592 [ 00:19:28.592 { 00:19:28.592 "name": "BaseBdev2", 00:19:28.592 "aliases": [ 00:19:28.592 "ca7ff445-eb5d-4508-903a-47e22108961f" 00:19:28.592 ], 00:19:28.592 "product_name": "Malloc disk", 00:19:28.592 "block_size": 512, 00:19:28.592 "num_blocks": 65536, 00:19:28.592 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:28.592 "assigned_rate_limits": { 00:19:28.592 "rw_ios_per_sec": 0, 00:19:28.592 "rw_mbytes_per_sec": 0, 00:19:28.592 "r_mbytes_per_sec": 0, 00:19:28.592 "w_mbytes_per_sec": 0 00:19:28.592 }, 00:19:28.592 "claimed": false, 00:19:28.592 "zoned": false, 00:19:28.592 "supported_io_types": { 00:19:28.592 "read": true, 00:19:28.592 "write": true, 00:19:28.592 "unmap": true, 00:19:28.592 "flush": true, 00:19:28.592 "reset": true, 00:19:28.592 "nvme_admin": false, 00:19:28.592 "nvme_io": false, 00:19:28.592 "nvme_io_md": false, 00:19:28.592 "write_zeroes": true, 00:19:28.592 "zcopy": true, 00:19:28.592 "get_zone_info": false, 00:19:28.592 "zone_management": false, 00:19:28.592 "zone_append": false, 00:19:28.592 "compare": false, 00:19:28.592 "compare_and_write": false, 00:19:28.592 "abort": true, 00:19:28.592 "seek_hole": false, 00:19:28.592 "seek_data": false, 00:19:28.592 "copy": true, 00:19:28.592 "nvme_iov_md": false 00:19:28.592 }, 00:19:28.592 "memory_domains": [ 00:19:28.592 { 00:19:28.592 "dma_device_id": "system", 00:19:28.592 "dma_device_type": 1 00:19:28.592 }, 
00:19:28.592 { 00:19:28.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.592 "dma_device_type": 2 00:19:28.592 } 00:19:28.592 ], 00:19:28.592 "driver_specific": {} 00:19:28.592 } 00:19:28.592 ] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.592 BaseBdev3 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.592 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.851 [ 00:19:28.851 { 00:19:28.851 "name": "BaseBdev3", 00:19:28.851 "aliases": [ 00:19:28.851 "7e98a6f5-4ec4-466f-b98f-14dad57b771e" 00:19:28.851 ], 00:19:28.851 "product_name": "Malloc disk", 00:19:28.851 "block_size": 512, 00:19:28.851 "num_blocks": 65536, 00:19:28.851 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:28.851 "assigned_rate_limits": { 00:19:28.851 "rw_ios_per_sec": 0, 00:19:28.851 "rw_mbytes_per_sec": 0, 00:19:28.851 "r_mbytes_per_sec": 0, 00:19:28.851 "w_mbytes_per_sec": 0 00:19:28.851 }, 00:19:28.851 "claimed": false, 00:19:28.851 "zoned": false, 00:19:28.851 "supported_io_types": { 00:19:28.851 "read": true, 00:19:28.851 "write": true, 00:19:28.851 "unmap": true, 00:19:28.851 "flush": true, 00:19:28.851 "reset": true, 00:19:28.851 "nvme_admin": false, 00:19:28.851 "nvme_io": false, 00:19:28.851 "nvme_io_md": false, 00:19:28.851 "write_zeroes": true, 00:19:28.851 "zcopy": true, 00:19:28.851 "get_zone_info": false, 00:19:28.851 "zone_management": false, 00:19:28.851 "zone_append": false, 00:19:28.851 "compare": false, 00:19:28.851 "compare_and_write": false, 00:19:28.851 "abort": true, 00:19:28.851 "seek_hole": false, 00:19:28.851 "seek_data": false, 00:19:28.851 "copy": true, 00:19:28.851 "nvme_iov_md": false 00:19:28.851 }, 00:19:28.851 "memory_domains": [ 00:19:28.851 { 00:19:28.851 "dma_device_id": "system", 00:19:28.851 
"dma_device_type": 1 00:19:28.851 }, 00:19:28.851 { 00:19:28.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.851 "dma_device_type": 2 00:19:28.851 } 00:19:28.851 ], 00:19:28.851 "driver_specific": {} 00:19:28.851 } 00:19:28.851 ] 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.851 BaseBdev4 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.851 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.852 20:32:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 [ 00:19:28.852 { 00:19:28.852 "name": "BaseBdev4", 00:19:28.852 "aliases": [ 00:19:28.852 "63f28005-6a4a-4257-b0c6-1cb1bdc353aa" 00:19:28.852 ], 00:19:28.852 "product_name": "Malloc disk", 00:19:28.852 "block_size": 512, 00:19:28.852 "num_blocks": 65536, 00:19:28.852 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:28.852 "assigned_rate_limits": { 00:19:28.852 "rw_ios_per_sec": 0, 00:19:28.852 "rw_mbytes_per_sec": 0, 00:19:28.852 "r_mbytes_per_sec": 0, 00:19:28.852 "w_mbytes_per_sec": 0 00:19:28.852 }, 00:19:28.852 "claimed": false, 00:19:28.852 "zoned": false, 00:19:28.852 "supported_io_types": { 00:19:28.852 "read": true, 00:19:28.852 "write": true, 00:19:28.852 "unmap": true, 00:19:28.852 "flush": true, 00:19:28.852 "reset": true, 00:19:28.852 "nvme_admin": false, 00:19:28.852 "nvme_io": false, 00:19:28.852 "nvme_io_md": false, 00:19:28.852 "write_zeroes": true, 00:19:28.852 "zcopy": true, 00:19:28.852 "get_zone_info": false, 00:19:28.852 "zone_management": false, 00:19:28.852 "zone_append": false, 00:19:28.852 "compare": false, 00:19:28.852 "compare_and_write": false, 00:19:28.852 "abort": true, 00:19:28.852 "seek_hole": false, 00:19:28.852 "seek_data": false, 00:19:28.852 "copy": true, 00:19:28.852 "nvme_iov_md": false 00:19:28.852 }, 00:19:28.852 "memory_domains": [ 00:19:28.852 { 00:19:28.852 
"dma_device_id": "system", 00:19:28.852 "dma_device_type": 1 00:19:28.852 }, 00:19:28.852 { 00:19:28.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.852 "dma_device_type": 2 00:19:28.852 } 00:19:28.852 ], 00:19:28.852 "driver_specific": {} 00:19:28.852 } 00:19:28.852 ] 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 [2024-11-26 20:32:22.265413] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.852 [2024-11-26 20:32:22.265510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.852 [2024-11-26 20:32:22.265584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.852 [2024-11-26 20:32:22.267734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:28.852 [2024-11-26 20:32:22.267835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.852 "name": "Existed_Raid", 00:19:28.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.852 "strip_size_kb": 64, 00:19:28.852 "state": "configuring", 00:19:28.852 "raid_level": "raid5f", 00:19:28.852 "superblock": false, 00:19:28.852 
"num_base_bdevs": 4, 00:19:28.852 "num_base_bdevs_discovered": 3, 00:19:28.852 "num_base_bdevs_operational": 4, 00:19:28.852 "base_bdevs_list": [ 00:19:28.852 { 00:19:28.852 "name": "BaseBdev1", 00:19:28.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.852 "is_configured": false, 00:19:28.852 "data_offset": 0, 00:19:28.852 "data_size": 0 00:19:28.852 }, 00:19:28.852 { 00:19:28.852 "name": "BaseBdev2", 00:19:28.852 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:28.852 "is_configured": true, 00:19:28.852 "data_offset": 0, 00:19:28.852 "data_size": 65536 00:19:28.852 }, 00:19:28.852 { 00:19:28.852 "name": "BaseBdev3", 00:19:28.852 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:28.852 "is_configured": true, 00:19:28.852 "data_offset": 0, 00:19:28.852 "data_size": 65536 00:19:28.852 }, 00:19:28.852 { 00:19:28.852 "name": "BaseBdev4", 00:19:28.852 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:28.852 "is_configured": true, 00:19:28.852 "data_offset": 0, 00:19:28.852 "data_size": 65536 00:19:28.852 } 00:19:28.852 ] 00:19:28.852 }' 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.852 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.421 [2024-11-26 20:32:22.756613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.421 "name": "Existed_Raid", 00:19:29.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.421 "strip_size_kb": 64, 00:19:29.421 "state": "configuring", 00:19:29.421 "raid_level": "raid5f", 00:19:29.421 "superblock": false, 00:19:29.421 "num_base_bdevs": 4, 
00:19:29.421 "num_base_bdevs_discovered": 2, 00:19:29.421 "num_base_bdevs_operational": 4, 00:19:29.421 "base_bdevs_list": [ 00:19:29.421 { 00:19:29.421 "name": "BaseBdev1", 00:19:29.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.421 "is_configured": false, 00:19:29.421 "data_offset": 0, 00:19:29.421 "data_size": 0 00:19:29.421 }, 00:19:29.421 { 00:19:29.421 "name": null, 00:19:29.421 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:29.421 "is_configured": false, 00:19:29.421 "data_offset": 0, 00:19:29.421 "data_size": 65536 00:19:29.421 }, 00:19:29.421 { 00:19:29.421 "name": "BaseBdev3", 00:19:29.421 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:29.421 "is_configured": true, 00:19:29.421 "data_offset": 0, 00:19:29.421 "data_size": 65536 00:19:29.421 }, 00:19:29.421 { 00:19:29.421 "name": "BaseBdev4", 00:19:29.421 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:29.421 "is_configured": true, 00:19:29.421 "data_offset": 0, 00:19:29.421 "data_size": 65536 00:19:29.421 } 00:19:29.421 ] 00:19:29.421 }' 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.421 20:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:30.071 20:32:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.071 [2024-11-26 20:32:23.315412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.071 BaseBdev1 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.071 20:32:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.071 [ 00:19:30.071 { 00:19:30.071 "name": "BaseBdev1", 00:19:30.071 "aliases": [ 00:19:30.071 "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0" 00:19:30.071 ], 00:19:30.071 "product_name": "Malloc disk", 00:19:30.071 "block_size": 512, 00:19:30.071 "num_blocks": 65536, 00:19:30.071 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:30.071 "assigned_rate_limits": { 00:19:30.071 "rw_ios_per_sec": 0, 00:19:30.071 "rw_mbytes_per_sec": 0, 00:19:30.071 "r_mbytes_per_sec": 0, 00:19:30.071 "w_mbytes_per_sec": 0 00:19:30.071 }, 00:19:30.071 "claimed": true, 00:19:30.071 "claim_type": "exclusive_write", 00:19:30.071 "zoned": false, 00:19:30.071 "supported_io_types": { 00:19:30.071 "read": true, 00:19:30.071 "write": true, 00:19:30.071 "unmap": true, 00:19:30.071 "flush": true, 00:19:30.071 "reset": true, 00:19:30.071 "nvme_admin": false, 00:19:30.071 "nvme_io": false, 00:19:30.071 "nvme_io_md": false, 00:19:30.071 "write_zeroes": true, 00:19:30.071 "zcopy": true, 00:19:30.071 "get_zone_info": false, 00:19:30.071 "zone_management": false, 00:19:30.071 "zone_append": false, 00:19:30.071 "compare": false, 00:19:30.071 "compare_and_write": false, 00:19:30.071 "abort": true, 00:19:30.071 "seek_hole": false, 00:19:30.071 "seek_data": false, 00:19:30.071 "copy": true, 00:19:30.071 "nvme_iov_md": false 00:19:30.071 }, 00:19:30.071 "memory_domains": [ 00:19:30.071 { 00:19:30.071 "dma_device_id": "system", 00:19:30.071 "dma_device_type": 1 00:19:30.071 }, 00:19:30.071 { 00:19:30.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.071 "dma_device_type": 2 00:19:30.071 } 00:19:30.071 ], 00:19:30.071 "driver_specific": {} 00:19:30.071 } 00:19:30.071 ] 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:30.071 20:32:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.071 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.072 "name": "Existed_Raid", 00:19:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.072 "strip_size_kb": 64, 00:19:30.072 "state": 
"configuring", 00:19:30.072 "raid_level": "raid5f", 00:19:30.072 "superblock": false, 00:19:30.072 "num_base_bdevs": 4, 00:19:30.072 "num_base_bdevs_discovered": 3, 00:19:30.072 "num_base_bdevs_operational": 4, 00:19:30.072 "base_bdevs_list": [ 00:19:30.072 { 00:19:30.072 "name": "BaseBdev1", 00:19:30.072 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:30.072 "is_configured": true, 00:19:30.072 "data_offset": 0, 00:19:30.072 "data_size": 65536 00:19:30.072 }, 00:19:30.072 { 00:19:30.072 "name": null, 00:19:30.072 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:30.072 "is_configured": false, 00:19:30.072 "data_offset": 0, 00:19:30.072 "data_size": 65536 00:19:30.072 }, 00:19:30.072 { 00:19:30.072 "name": "BaseBdev3", 00:19:30.072 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:30.072 "is_configured": true, 00:19:30.072 "data_offset": 0, 00:19:30.072 "data_size": 65536 00:19:30.072 }, 00:19:30.072 { 00:19:30.072 "name": "BaseBdev4", 00:19:30.072 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:30.072 "is_configured": true, 00:19:30.072 "data_offset": 0, 00:19:30.072 "data_size": 65536 00:19:30.072 } 00:19:30.072 ] 00:19:30.072 }' 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.072 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.332 20:32:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.332 [2024-11-26 20:32:23.826635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.332 20:32:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.332 "name": "Existed_Raid", 00:19:30.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.332 "strip_size_kb": 64, 00:19:30.332 "state": "configuring", 00:19:30.332 "raid_level": "raid5f", 00:19:30.332 "superblock": false, 00:19:30.332 "num_base_bdevs": 4, 00:19:30.332 "num_base_bdevs_discovered": 2, 00:19:30.332 "num_base_bdevs_operational": 4, 00:19:30.332 "base_bdevs_list": [ 00:19:30.332 { 00:19:30.332 "name": "BaseBdev1", 00:19:30.332 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:30.332 "is_configured": true, 00:19:30.332 "data_offset": 0, 00:19:30.332 "data_size": 65536 00:19:30.332 }, 00:19:30.332 { 00:19:30.332 "name": null, 00:19:30.332 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:30.332 "is_configured": false, 00:19:30.332 "data_offset": 0, 00:19:30.332 "data_size": 65536 00:19:30.332 }, 00:19:30.332 { 00:19:30.332 "name": null, 00:19:30.332 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:30.332 "is_configured": false, 00:19:30.332 "data_offset": 0, 00:19:30.332 "data_size": 65536 00:19:30.332 }, 00:19:30.332 { 00:19:30.332 "name": "BaseBdev4", 00:19:30.332 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:30.332 "is_configured": true, 00:19:30.332 "data_offset": 0, 00:19:30.332 "data_size": 65536 00:19:30.332 } 00:19:30.332 ] 00:19:30.332 }' 00:19:30.332 20:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.332 20:32:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.901 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:30.901 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.901 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.901 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.901 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.902 [2024-11-26 20:32:24.317836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.902 
20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.902 "name": "Existed_Raid", 00:19:30.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.902 "strip_size_kb": 64, 00:19:30.902 "state": "configuring", 00:19:30.902 "raid_level": "raid5f", 00:19:30.902 "superblock": false, 00:19:30.902 "num_base_bdevs": 4, 00:19:30.902 "num_base_bdevs_discovered": 3, 00:19:30.902 "num_base_bdevs_operational": 4, 00:19:30.902 "base_bdevs_list": [ 00:19:30.902 { 00:19:30.902 "name": "BaseBdev1", 00:19:30.902 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:30.902 "is_configured": true, 00:19:30.902 "data_offset": 0, 00:19:30.902 "data_size": 65536 00:19:30.902 }, 00:19:30.902 { 00:19:30.902 "name": null, 00:19:30.902 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:30.902 "is_configured": 
false, 00:19:30.902 "data_offset": 0, 00:19:30.902 "data_size": 65536 00:19:30.902 }, 00:19:30.902 { 00:19:30.902 "name": "BaseBdev3", 00:19:30.902 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:30.902 "is_configured": true, 00:19:30.902 "data_offset": 0, 00:19:30.902 "data_size": 65536 00:19:30.902 }, 00:19:30.902 { 00:19:30.902 "name": "BaseBdev4", 00:19:30.902 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:30.902 "is_configured": true, 00:19:30.902 "data_offset": 0, 00:19:30.902 "data_size": 65536 00:19:30.902 } 00:19:30.902 ] 00:19:30.902 }' 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.902 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.471 [2024-11-26 20:32:24.817053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.471 "name": "Existed_Raid", 00:19:31.471 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:31.471 "strip_size_kb": 64, 00:19:31.471 "state": "configuring", 00:19:31.471 "raid_level": "raid5f", 00:19:31.471 "superblock": false, 00:19:31.471 "num_base_bdevs": 4, 00:19:31.471 "num_base_bdevs_discovered": 2, 00:19:31.471 "num_base_bdevs_operational": 4, 00:19:31.471 "base_bdevs_list": [ 00:19:31.471 { 00:19:31.471 "name": null, 00:19:31.471 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:31.471 "is_configured": false, 00:19:31.471 "data_offset": 0, 00:19:31.471 "data_size": 65536 00:19:31.471 }, 00:19:31.471 { 00:19:31.471 "name": null, 00:19:31.471 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:31.471 "is_configured": false, 00:19:31.471 "data_offset": 0, 00:19:31.471 "data_size": 65536 00:19:31.471 }, 00:19:31.471 { 00:19:31.471 "name": "BaseBdev3", 00:19:31.471 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:31.471 "is_configured": true, 00:19:31.471 "data_offset": 0, 00:19:31.471 "data_size": 65536 00:19:31.471 }, 00:19:31.471 { 00:19:31.471 "name": "BaseBdev4", 00:19:31.471 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:31.471 "is_configured": true, 00:19:31.471 "data_offset": 0, 00:19:31.471 "data_size": 65536 00:19:31.471 } 00:19:31.471 ] 00:19:31.471 }' 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.471 20:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.036 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.036 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.037 [2024-11-26 20:32:25.443802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.037 "name": "Existed_Raid", 00:19:32.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.037 "strip_size_kb": 64, 00:19:32.037 "state": "configuring", 00:19:32.037 "raid_level": "raid5f", 00:19:32.037 "superblock": false, 00:19:32.037 "num_base_bdevs": 4, 00:19:32.037 "num_base_bdevs_discovered": 3, 00:19:32.037 "num_base_bdevs_operational": 4, 00:19:32.037 "base_bdevs_list": [ 00:19:32.037 { 00:19:32.037 "name": null, 00:19:32.037 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:32.037 "is_configured": false, 00:19:32.037 "data_offset": 0, 00:19:32.037 "data_size": 65536 00:19:32.037 }, 00:19:32.037 { 00:19:32.037 "name": "BaseBdev2", 00:19:32.037 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:32.037 "is_configured": true, 00:19:32.037 "data_offset": 0, 00:19:32.037 "data_size": 65536 00:19:32.037 }, 00:19:32.037 { 00:19:32.037 "name": "BaseBdev3", 00:19:32.037 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:32.037 "is_configured": true, 00:19:32.037 "data_offset": 0, 00:19:32.037 "data_size": 65536 00:19:32.037 }, 00:19:32.037 { 00:19:32.037 "name": "BaseBdev4", 00:19:32.037 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:32.037 "is_configured": true, 00:19:32.037 "data_offset": 0, 00:19:32.037 "data_size": 65536 00:19:32.037 } 00:19:32.037 ] 00:19:32.037 }' 00:19:32.037 20:32:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.037 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.602 20:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.602 [2024-11-26 20:32:26.012383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:32.602 [2024-11-26 
20:32:26.012512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:32.602 [2024-11-26 20:32:26.012539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:32.602 [2024-11-26 20:32:26.012829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:32.602 [2024-11-26 20:32:26.020345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:32.602 [2024-11-26 20:32:26.020408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:32.602 NewBaseBdev 00:19:32.602 [2024-11-26 20:32:26.020731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.602 [ 00:19:32.602 { 00:19:32.602 "name": "NewBaseBdev", 00:19:32.602 "aliases": [ 00:19:32.602 "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0" 00:19:32.602 ], 00:19:32.602 "product_name": "Malloc disk", 00:19:32.602 "block_size": 512, 00:19:32.602 "num_blocks": 65536, 00:19:32.602 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:32.602 "assigned_rate_limits": { 00:19:32.602 "rw_ios_per_sec": 0, 00:19:32.602 "rw_mbytes_per_sec": 0, 00:19:32.602 "r_mbytes_per_sec": 0, 00:19:32.602 "w_mbytes_per_sec": 0 00:19:32.602 }, 00:19:32.602 "claimed": true, 00:19:32.602 "claim_type": "exclusive_write", 00:19:32.602 "zoned": false, 00:19:32.602 "supported_io_types": { 00:19:32.602 "read": true, 00:19:32.602 "write": true, 00:19:32.602 "unmap": true, 00:19:32.602 "flush": true, 00:19:32.602 "reset": true, 00:19:32.602 "nvme_admin": false, 00:19:32.602 "nvme_io": false, 00:19:32.602 "nvme_io_md": false, 00:19:32.602 "write_zeroes": true, 00:19:32.602 "zcopy": true, 00:19:32.602 "get_zone_info": false, 00:19:32.602 "zone_management": false, 00:19:32.602 "zone_append": false, 00:19:32.602 "compare": false, 00:19:32.602 "compare_and_write": false, 00:19:32.602 "abort": true, 00:19:32.602 "seek_hole": false, 00:19:32.602 "seek_data": false, 00:19:32.602 "copy": true, 00:19:32.602 "nvme_iov_md": false 00:19:32.602 }, 00:19:32.602 "memory_domains": [ 00:19:32.602 { 00:19:32.602 "dma_device_id": "system", 00:19:32.602 "dma_device_type": 1 00:19:32.602 }, 00:19:32.602 { 00:19:32.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.602 "dma_device_type": 2 00:19:32.602 } 
00:19:32.602 ], 00:19:32.602 "driver_specific": {} 00:19:32.602 } 00:19:32.602 ] 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.602 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.603 "name": "Existed_Raid", 00:19:32.603 "uuid": "4a193940-9fb2-4f2b-869f-0b3391c6770f", 00:19:32.603 "strip_size_kb": 64, 00:19:32.603 "state": "online", 00:19:32.603 "raid_level": "raid5f", 00:19:32.603 "superblock": false, 00:19:32.603 "num_base_bdevs": 4, 00:19:32.603 "num_base_bdevs_discovered": 4, 00:19:32.603 "num_base_bdevs_operational": 4, 00:19:32.603 "base_bdevs_list": [ 00:19:32.603 { 00:19:32.603 "name": "NewBaseBdev", 00:19:32.603 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:32.603 "is_configured": true, 00:19:32.603 "data_offset": 0, 00:19:32.603 "data_size": 65536 00:19:32.603 }, 00:19:32.603 { 00:19:32.603 "name": "BaseBdev2", 00:19:32.603 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:32.603 "is_configured": true, 00:19:32.603 "data_offset": 0, 00:19:32.603 "data_size": 65536 00:19:32.603 }, 00:19:32.603 { 00:19:32.603 "name": "BaseBdev3", 00:19:32.603 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:32.603 "is_configured": true, 00:19:32.603 "data_offset": 0, 00:19:32.603 "data_size": 65536 00:19:32.603 }, 00:19:32.603 { 00:19:32.603 "name": "BaseBdev4", 00:19:32.603 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:32.603 "is_configured": true, 00:19:32.603 "data_offset": 0, 00:19:32.603 "data_size": 65536 00:19:32.603 } 00:19:32.603 ] 00:19:32.603 }' 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.603 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.170 [2024-11-26 20:32:26.560964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.170 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.170 "name": "Existed_Raid", 00:19:33.170 "aliases": [ 00:19:33.170 "4a193940-9fb2-4f2b-869f-0b3391c6770f" 00:19:33.170 ], 00:19:33.170 "product_name": "Raid Volume", 00:19:33.170 "block_size": 512, 00:19:33.170 "num_blocks": 196608, 00:19:33.170 "uuid": "4a193940-9fb2-4f2b-869f-0b3391c6770f", 00:19:33.170 "assigned_rate_limits": { 00:19:33.170 "rw_ios_per_sec": 0, 00:19:33.170 "rw_mbytes_per_sec": 0, 00:19:33.170 "r_mbytes_per_sec": 0, 00:19:33.170 "w_mbytes_per_sec": 0 00:19:33.170 }, 00:19:33.170 "claimed": false, 00:19:33.170 "zoned": false, 00:19:33.170 "supported_io_types": { 00:19:33.170 "read": true, 00:19:33.170 "write": true, 00:19:33.170 "unmap": false, 00:19:33.170 "flush": false, 00:19:33.170 "reset": true, 00:19:33.170 "nvme_admin": false, 00:19:33.170 "nvme_io": false, 00:19:33.170 "nvme_io_md": 
false, 00:19:33.170 "write_zeroes": true, 00:19:33.170 "zcopy": false, 00:19:33.170 "get_zone_info": false, 00:19:33.170 "zone_management": false, 00:19:33.170 "zone_append": false, 00:19:33.170 "compare": false, 00:19:33.170 "compare_and_write": false, 00:19:33.170 "abort": false, 00:19:33.170 "seek_hole": false, 00:19:33.170 "seek_data": false, 00:19:33.170 "copy": false, 00:19:33.170 "nvme_iov_md": false 00:19:33.170 }, 00:19:33.170 "driver_specific": { 00:19:33.170 "raid": { 00:19:33.170 "uuid": "4a193940-9fb2-4f2b-869f-0b3391c6770f", 00:19:33.170 "strip_size_kb": 64, 00:19:33.170 "state": "online", 00:19:33.170 "raid_level": "raid5f", 00:19:33.170 "superblock": false, 00:19:33.170 "num_base_bdevs": 4, 00:19:33.170 "num_base_bdevs_discovered": 4, 00:19:33.170 "num_base_bdevs_operational": 4, 00:19:33.170 "base_bdevs_list": [ 00:19:33.170 { 00:19:33.170 "name": "NewBaseBdev", 00:19:33.170 "uuid": "b8162cb0-e7e7-4bb0-bb5a-9de5b9ce15f0", 00:19:33.170 "is_configured": true, 00:19:33.170 "data_offset": 0, 00:19:33.170 "data_size": 65536 00:19:33.170 }, 00:19:33.170 { 00:19:33.170 "name": "BaseBdev2", 00:19:33.170 "uuid": "ca7ff445-eb5d-4508-903a-47e22108961f", 00:19:33.170 "is_configured": true, 00:19:33.170 "data_offset": 0, 00:19:33.170 "data_size": 65536 00:19:33.170 }, 00:19:33.170 { 00:19:33.170 "name": "BaseBdev3", 00:19:33.170 "uuid": "7e98a6f5-4ec4-466f-b98f-14dad57b771e", 00:19:33.170 "is_configured": true, 00:19:33.170 "data_offset": 0, 00:19:33.170 "data_size": 65536 00:19:33.170 }, 00:19:33.170 { 00:19:33.170 "name": "BaseBdev4", 00:19:33.170 "uuid": "63f28005-6a4a-4257-b0c6-1cb1bdc353aa", 00:19:33.170 "is_configured": true, 00:19:33.170 "data_offset": 0, 00:19:33.170 "data_size": 65536 00:19:33.171 } 00:19:33.171 ] 00:19:33.171 } 00:19:33.171 } 00:19:33.171 }' 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:33.171 20:32:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:33.171 BaseBdev2 00:19:33.171 BaseBdev3 00:19:33.171 BaseBdev4' 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.171 20:32:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.429 20:32:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.429 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.429 [2024-11-26 20:32:26.844211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:33.430 [2024-11-26 20:32:26.844300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.430 [2024-11-26 20:32:26.844432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.430 [2024-11-26 20:32:26.844803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.430 [2024-11-26 20:32:26.844867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83233 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83233 ']' 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83233 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83233 00:19:33.430 killing process with pid 83233 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83233' 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83233 00:19:33.430 [2024-11-26 20:32:26.891221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.430 20:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83233 00:19:33.997 [2024-11-26 20:32:27.318428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:35.375 20:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:35.375 00:19:35.375 real 0m12.180s 00:19:35.375 user 0m19.316s 00:19:35.375 sys 0m2.169s 00:19:35.375 ************************************ 00:19:35.375 END TEST raid5f_state_function_test 00:19:35.375 ************************************ 00:19:35.375 20:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.376 20:32:28 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:35.376 20:32:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:35.376 20:32:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.376 20:32:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.376 ************************************ 00:19:35.376 START TEST 
raid5f_state_function_test_sb 00:19:35.376 ************************************ 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:35.376 
20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83910 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83910' 00:19:35.376 Process raid pid: 83910 00:19:35.376 20:32:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83910 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83910 ']' 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.376 20:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.376 [2024-11-26 20:32:28.699023] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:19:35.376 [2024-11-26 20:32:28.699253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.376 [2024-11-26 20:32:28.875969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.635 [2024-11-26 20:32:28.998372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.894 [2024-11-26 20:32:29.217419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.894 [2024-11-26 20:32:29.217575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.153 [2024-11-26 20:32:29.563623] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:36.153 [2024-11-26 20:32:29.563728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:36.153 [2024-11-26 20:32:29.563761] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:36.153 [2024-11-26 20:32:29.563786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:36.153 [2024-11-26 20:32:29.563805] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:36.153 [2024-11-26 20:32:29.563826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:36.153 [2024-11-26 20:32:29.563845] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:36.153 [2024-11-26 20:32:29.563865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.153 "name": "Existed_Raid", 00:19:36.153 "uuid": "5014cd25-f153-4ca1-888c-5957ccac753f", 00:19:36.153 "strip_size_kb": 64, 00:19:36.153 "state": "configuring", 00:19:36.153 "raid_level": "raid5f", 00:19:36.153 "superblock": true, 00:19:36.153 "num_base_bdevs": 4, 00:19:36.153 "num_base_bdevs_discovered": 0, 00:19:36.153 "num_base_bdevs_operational": 4, 00:19:36.153 "base_bdevs_list": [ 00:19:36.153 { 00:19:36.153 "name": "BaseBdev1", 00:19:36.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.153 "is_configured": false, 00:19:36.153 "data_offset": 0, 00:19:36.153 "data_size": 0 00:19:36.153 }, 00:19:36.153 { 00:19:36.153 "name": "BaseBdev2", 00:19:36.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.153 "is_configured": false, 00:19:36.153 "data_offset": 0, 00:19:36.153 "data_size": 0 00:19:36.153 }, 00:19:36.153 { 00:19:36.153 "name": "BaseBdev3", 00:19:36.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.153 "is_configured": false, 00:19:36.153 "data_offset": 0, 00:19:36.153 "data_size": 0 00:19:36.153 }, 00:19:36.153 { 00:19:36.153 "name": "BaseBdev4", 00:19:36.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.153 "is_configured": false, 00:19:36.153 "data_offset": 0, 00:19:36.153 "data_size": 0 00:19:36.153 } 00:19:36.153 ] 00:19:36.153 }' 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.153 20:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.722 [2024-11-26 20:32:30.054730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:36.722 [2024-11-26 20:32:30.054829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.722 [2024-11-26 20:32:30.066694] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:36.722 [2024-11-26 20:32:30.066737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:36.722 [2024-11-26 20:32:30.066746] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:36.722 [2024-11-26 20:32:30.066756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:36.722 [2024-11-26 20:32:30.066763] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:36.722 [2024-11-26 20:32:30.066772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:36.722 [2024-11-26 20:32:30.066778] 
bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:36.722 [2024-11-26 20:32:30.066786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.722 [2024-11-26 20:32:30.117319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.722 BaseBdev1 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.722 [ 00:19:36.722 { 00:19:36.722 "name": "BaseBdev1", 00:19:36.722 "aliases": [ 00:19:36.722 "0874ebb6-07bb-4bd6-83cc-1258411fc9fd" 00:19:36.722 ], 00:19:36.722 "product_name": "Malloc disk", 00:19:36.722 "block_size": 512, 00:19:36.722 "num_blocks": 65536, 00:19:36.722 "uuid": "0874ebb6-07bb-4bd6-83cc-1258411fc9fd", 00:19:36.722 "assigned_rate_limits": { 00:19:36.722 "rw_ios_per_sec": 0, 00:19:36.722 "rw_mbytes_per_sec": 0, 00:19:36.722 "r_mbytes_per_sec": 0, 00:19:36.722 "w_mbytes_per_sec": 0 00:19:36.722 }, 00:19:36.722 "claimed": true, 00:19:36.722 "claim_type": "exclusive_write", 00:19:36.722 "zoned": false, 00:19:36.722 "supported_io_types": { 00:19:36.722 "read": true, 00:19:36.722 "write": true, 00:19:36.722 "unmap": true, 00:19:36.722 "flush": true, 00:19:36.722 "reset": true, 00:19:36.722 "nvme_admin": false, 00:19:36.722 "nvme_io": false, 00:19:36.722 "nvme_io_md": false, 00:19:36.722 "write_zeroes": true, 00:19:36.722 "zcopy": true, 00:19:36.722 "get_zone_info": false, 00:19:36.722 "zone_management": false, 00:19:36.722 "zone_append": false, 00:19:36.722 "compare": false, 00:19:36.722 "compare_and_write": false, 00:19:36.722 "abort": true, 00:19:36.722 "seek_hole": false, 00:19:36.722 "seek_data": false, 00:19:36.722 "copy": true, 00:19:36.722 "nvme_iov_md": false 00:19:36.722 }, 00:19:36.722 "memory_domains": [ 00:19:36.722 { 00:19:36.722 "dma_device_id": "system", 00:19:36.722 "dma_device_type": 1 00:19:36.722 }, 00:19:36.722 { 00:19:36.722 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:36.722 "dma_device_type": 2 00:19:36.722 } 00:19:36.722 ], 00:19:36.722 "driver_specific": {} 00:19:36.722 } 00:19:36.722 ] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.722 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.722 "name": "Existed_Raid", 00:19:36.722 "uuid": "c2f8cfd8-e270-4908-a057-8ad33e955c4c", 00:19:36.722 "strip_size_kb": 64, 00:19:36.722 "state": "configuring", 00:19:36.722 "raid_level": "raid5f", 00:19:36.722 "superblock": true, 00:19:36.722 "num_base_bdevs": 4, 00:19:36.722 "num_base_bdevs_discovered": 1, 00:19:36.722 "num_base_bdevs_operational": 4, 00:19:36.722 "base_bdevs_list": [ 00:19:36.722 { 00:19:36.722 "name": "BaseBdev1", 00:19:36.722 "uuid": "0874ebb6-07bb-4bd6-83cc-1258411fc9fd", 00:19:36.722 "is_configured": true, 00:19:36.722 "data_offset": 2048, 00:19:36.722 "data_size": 63488 00:19:36.722 }, 00:19:36.722 { 00:19:36.722 "name": "BaseBdev2", 00:19:36.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.722 "is_configured": false, 00:19:36.722 "data_offset": 0, 00:19:36.722 "data_size": 0 00:19:36.722 }, 00:19:36.722 { 00:19:36.722 "name": "BaseBdev3", 00:19:36.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.722 "is_configured": false, 00:19:36.722 "data_offset": 0, 00:19:36.722 "data_size": 0 00:19:36.722 }, 00:19:36.723 { 00:19:36.723 "name": "BaseBdev4", 00:19:36.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.723 "is_configured": false, 00:19:36.723 "data_offset": 0, 00:19:36.723 "data_size": 0 00:19:36.723 } 00:19:36.723 ] 00:19:36.723 }' 00:19:36.723 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.723 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:37.291 20:32:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.291 [2024-11-26 20:32:30.552674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:37.291 [2024-11-26 20:32:30.552786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.291 [2024-11-26 20:32:30.564736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.291 [2024-11-26 20:32:30.566843] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.291 [2024-11-26 20:32:30.566925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.291 [2024-11-26 20:32:30.566962] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:37.291 [2024-11-26 20:32:30.566997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:37.291 [2024-11-26 20:32:30.567027] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:37.291 [2024-11-26 20:32:30.567075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.291 20:32:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.291 "name": "Existed_Raid", 00:19:37.291 "uuid": "2185cbdc-f01c-4e4a-8412-d6a6584679e5", 00:19:37.291 "strip_size_kb": 64, 00:19:37.291 "state": "configuring", 00:19:37.291 "raid_level": "raid5f", 00:19:37.291 "superblock": true, 00:19:37.291 "num_base_bdevs": 4, 00:19:37.291 "num_base_bdevs_discovered": 1, 00:19:37.291 "num_base_bdevs_operational": 4, 00:19:37.291 "base_bdevs_list": [ 00:19:37.291 { 00:19:37.291 "name": "BaseBdev1", 00:19:37.291 "uuid": "0874ebb6-07bb-4bd6-83cc-1258411fc9fd", 00:19:37.291 "is_configured": true, 00:19:37.291 "data_offset": 2048, 00:19:37.291 "data_size": 63488 00:19:37.291 }, 00:19:37.291 { 00:19:37.291 "name": "BaseBdev2", 00:19:37.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.291 "is_configured": false, 00:19:37.291 "data_offset": 0, 00:19:37.291 "data_size": 0 00:19:37.291 }, 00:19:37.291 { 00:19:37.291 "name": "BaseBdev3", 00:19:37.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.291 "is_configured": false, 00:19:37.291 "data_offset": 0, 00:19:37.291 "data_size": 0 00:19:37.291 }, 00:19:37.291 { 00:19:37.291 "name": "BaseBdev4", 00:19:37.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.291 "is_configured": false, 00:19:37.291 "data_offset": 0, 00:19:37.291 "data_size": 0 00:19:37.291 } 00:19:37.291 ] 00:19:37.291 }' 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.291 20:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.551 [2024-11-26 20:32:31.092871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.551 BaseBdev2 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.551 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.810 [ 00:19:37.810 { 00:19:37.810 "name": "BaseBdev2", 00:19:37.810 "aliases": [ 00:19:37.810 
"e1a413aa-3642-49f8-a060-572c92c69a30" 00:19:37.810 ], 00:19:37.810 "product_name": "Malloc disk", 00:19:37.810 "block_size": 512, 00:19:37.810 "num_blocks": 65536, 00:19:37.810 "uuid": "e1a413aa-3642-49f8-a060-572c92c69a30", 00:19:37.810 "assigned_rate_limits": { 00:19:37.810 "rw_ios_per_sec": 0, 00:19:37.810 "rw_mbytes_per_sec": 0, 00:19:37.810 "r_mbytes_per_sec": 0, 00:19:37.810 "w_mbytes_per_sec": 0 00:19:37.810 }, 00:19:37.810 "claimed": true, 00:19:37.810 "claim_type": "exclusive_write", 00:19:37.810 "zoned": false, 00:19:37.810 "supported_io_types": { 00:19:37.810 "read": true, 00:19:37.810 "write": true, 00:19:37.810 "unmap": true, 00:19:37.810 "flush": true, 00:19:37.810 "reset": true, 00:19:37.810 "nvme_admin": false, 00:19:37.810 "nvme_io": false, 00:19:37.810 "nvme_io_md": false, 00:19:37.810 "write_zeroes": true, 00:19:37.810 "zcopy": true, 00:19:37.810 "get_zone_info": false, 00:19:37.810 "zone_management": false, 00:19:37.810 "zone_append": false, 00:19:37.810 "compare": false, 00:19:37.810 "compare_and_write": false, 00:19:37.810 "abort": true, 00:19:37.810 "seek_hole": false, 00:19:37.810 "seek_data": false, 00:19:37.810 "copy": true, 00:19:37.810 "nvme_iov_md": false 00:19:37.810 }, 00:19:37.810 "memory_domains": [ 00:19:37.810 { 00:19:37.810 "dma_device_id": "system", 00:19:37.810 "dma_device_type": 1 00:19:37.810 }, 00:19:37.810 { 00:19:37.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.810 "dma_device_type": 2 00:19:37.810 } 00:19:37.810 ], 00:19:37.810 "driver_specific": {} 00:19:37.810 } 00:19:37.810 ] 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.810 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.810 "name": "Existed_Raid", 00:19:37.810 "uuid": 
"2185cbdc-f01c-4e4a-8412-d6a6584679e5", 00:19:37.810 "strip_size_kb": 64, 00:19:37.810 "state": "configuring", 00:19:37.810 "raid_level": "raid5f", 00:19:37.810 "superblock": true, 00:19:37.810 "num_base_bdevs": 4, 00:19:37.810 "num_base_bdevs_discovered": 2, 00:19:37.810 "num_base_bdevs_operational": 4, 00:19:37.810 "base_bdevs_list": [ 00:19:37.811 { 00:19:37.811 "name": "BaseBdev1", 00:19:37.811 "uuid": "0874ebb6-07bb-4bd6-83cc-1258411fc9fd", 00:19:37.811 "is_configured": true, 00:19:37.811 "data_offset": 2048, 00:19:37.811 "data_size": 63488 00:19:37.811 }, 00:19:37.811 { 00:19:37.811 "name": "BaseBdev2", 00:19:37.811 "uuid": "e1a413aa-3642-49f8-a060-572c92c69a30", 00:19:37.811 "is_configured": true, 00:19:37.811 "data_offset": 2048, 00:19:37.811 "data_size": 63488 00:19:37.811 }, 00:19:37.811 { 00:19:37.811 "name": "BaseBdev3", 00:19:37.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.811 "is_configured": false, 00:19:37.811 "data_offset": 0, 00:19:37.811 "data_size": 0 00:19:37.811 }, 00:19:37.811 { 00:19:37.811 "name": "BaseBdev4", 00:19:37.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.811 "is_configured": false, 00:19:37.811 "data_offset": 0, 00:19:37.811 "data_size": 0 00:19:37.811 } 00:19:37.811 ] 00:19:37.811 }' 00:19:37.811 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.811 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.070 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:38.070 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.070 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.329 [2024-11-26 20:32:31.641555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.329 BaseBdev3 
00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.329 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.329 [ 00:19:38.329 { 00:19:38.329 "name": "BaseBdev3", 00:19:38.329 "aliases": [ 00:19:38.329 "787922bc-33be-4197-bc8a-95a0d442ee55" 00:19:38.329 ], 00:19:38.329 "product_name": "Malloc disk", 00:19:38.329 "block_size": 512, 00:19:38.330 "num_blocks": 65536, 00:19:38.330 "uuid": "787922bc-33be-4197-bc8a-95a0d442ee55", 00:19:38.330 
"assigned_rate_limits": { 00:19:38.330 "rw_ios_per_sec": 0, 00:19:38.330 "rw_mbytes_per_sec": 0, 00:19:38.330 "r_mbytes_per_sec": 0, 00:19:38.330 "w_mbytes_per_sec": 0 00:19:38.330 }, 00:19:38.330 "claimed": true, 00:19:38.330 "claim_type": "exclusive_write", 00:19:38.330 "zoned": false, 00:19:38.330 "supported_io_types": { 00:19:38.330 "read": true, 00:19:38.330 "write": true, 00:19:38.330 "unmap": true, 00:19:38.330 "flush": true, 00:19:38.330 "reset": true, 00:19:38.330 "nvme_admin": false, 00:19:38.330 "nvme_io": false, 00:19:38.330 "nvme_io_md": false, 00:19:38.330 "write_zeroes": true, 00:19:38.330 "zcopy": true, 00:19:38.330 "get_zone_info": false, 00:19:38.330 "zone_management": false, 00:19:38.330 "zone_append": false, 00:19:38.330 "compare": false, 00:19:38.330 "compare_and_write": false, 00:19:38.330 "abort": true, 00:19:38.330 "seek_hole": false, 00:19:38.330 "seek_data": false, 00:19:38.330 "copy": true, 00:19:38.330 "nvme_iov_md": false 00:19:38.330 }, 00:19:38.330 "memory_domains": [ 00:19:38.330 { 00:19:38.330 "dma_device_id": "system", 00:19:38.330 "dma_device_type": 1 00:19:38.330 }, 00:19:38.330 { 00:19:38.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.330 "dma_device_type": 2 00:19:38.330 } 00:19:38.330 ], 00:19:38.330 "driver_specific": {} 00:19:38.330 } 00:19:38.330 ] 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.330 "name": "Existed_Raid", 00:19:38.330 "uuid": "2185cbdc-f01c-4e4a-8412-d6a6584679e5", 00:19:38.330 "strip_size_kb": 64, 00:19:38.330 "state": "configuring", 00:19:38.330 "raid_level": "raid5f", 00:19:38.330 "superblock": true, 00:19:38.330 "num_base_bdevs": 4, 00:19:38.330 "num_base_bdevs_discovered": 3, 
00:19:38.330 "num_base_bdevs_operational": 4, 00:19:38.330 "base_bdevs_list": [ 00:19:38.330 { 00:19:38.330 "name": "BaseBdev1", 00:19:38.330 "uuid": "0874ebb6-07bb-4bd6-83cc-1258411fc9fd", 00:19:38.330 "is_configured": true, 00:19:38.330 "data_offset": 2048, 00:19:38.330 "data_size": 63488 00:19:38.330 }, 00:19:38.330 { 00:19:38.330 "name": "BaseBdev2", 00:19:38.330 "uuid": "e1a413aa-3642-49f8-a060-572c92c69a30", 00:19:38.330 "is_configured": true, 00:19:38.330 "data_offset": 2048, 00:19:38.330 "data_size": 63488 00:19:38.330 }, 00:19:38.330 { 00:19:38.330 "name": "BaseBdev3", 00:19:38.330 "uuid": "787922bc-33be-4197-bc8a-95a0d442ee55", 00:19:38.330 "is_configured": true, 00:19:38.330 "data_offset": 2048, 00:19:38.330 "data_size": 63488 00:19:38.330 }, 00:19:38.330 { 00:19:38.330 "name": "BaseBdev4", 00:19:38.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.330 "is_configured": false, 00:19:38.330 "data_offset": 0, 00:19:38.330 "data_size": 0 00:19:38.330 } 00:19:38.330 ] 00:19:38.330 }' 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.330 20:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.899 [2024-11-26 20:32:32.217089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:38.899 [2024-11-26 20:32:32.217550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:38.899 [2024-11-26 20:32:32.217613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:38.899 [2024-11-26 
20:32:32.217941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:38.899 BaseBdev4 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.899 [2024-11-26 20:32:32.227024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:38.899 [2024-11-26 20:32:32.227094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:38.899 [2024-11-26 20:32:32.227445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:38.899 20:32:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.899 [ 00:19:38.899 { 00:19:38.899 "name": "BaseBdev4", 00:19:38.899 "aliases": [ 00:19:38.899 "4dab8ee2-00ab-48b3-8787-85d68a4f3d7e" 00:19:38.899 ], 00:19:38.899 "product_name": "Malloc disk", 00:19:38.899 "block_size": 512, 00:19:38.899 "num_blocks": 65536, 00:19:38.899 "uuid": "4dab8ee2-00ab-48b3-8787-85d68a4f3d7e", 00:19:38.899 "assigned_rate_limits": { 00:19:38.899 "rw_ios_per_sec": 0, 00:19:38.899 "rw_mbytes_per_sec": 0, 00:19:38.899 "r_mbytes_per_sec": 0, 00:19:38.899 "w_mbytes_per_sec": 0 00:19:38.899 }, 00:19:38.899 "claimed": true, 00:19:38.899 "claim_type": "exclusive_write", 00:19:38.899 "zoned": false, 00:19:38.899 "supported_io_types": { 00:19:38.899 "read": true, 00:19:38.899 "write": true, 00:19:38.899 "unmap": true, 00:19:38.899 "flush": true, 00:19:38.899 "reset": true, 00:19:38.899 "nvme_admin": false, 00:19:38.899 "nvme_io": false, 00:19:38.899 "nvme_io_md": false, 00:19:38.899 "write_zeroes": true, 00:19:38.899 "zcopy": true, 00:19:38.899 "get_zone_info": false, 00:19:38.899 "zone_management": false, 00:19:38.899 "zone_append": false, 00:19:38.899 "compare": false, 00:19:38.899 "compare_and_write": false, 00:19:38.899 "abort": true, 00:19:38.899 "seek_hole": false, 00:19:38.899 "seek_data": false, 00:19:38.899 "copy": true, 00:19:38.899 "nvme_iov_md": false 00:19:38.899 }, 00:19:38.899 "memory_domains": [ 00:19:38.899 { 00:19:38.899 "dma_device_id": "system", 00:19:38.899 "dma_device_type": 1 00:19:38.899 }, 00:19:38.899 { 00:19:38.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.899 "dma_device_type": 2 00:19:38.899 } 00:19:38.899 ], 00:19:38.899 "driver_specific": {} 00:19:38.899 } 00:19:38.899 ] 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.899 20:32:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.899 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.900 "name": "Existed_Raid", 00:19:38.900 "uuid": "2185cbdc-f01c-4e4a-8412-d6a6584679e5", 00:19:38.900 "strip_size_kb": 64, 00:19:38.900 "state": "online", 00:19:38.900 "raid_level": "raid5f", 00:19:38.900 "superblock": true, 00:19:38.900 "num_base_bdevs": 4, 00:19:38.900 "num_base_bdevs_discovered": 4, 00:19:38.900 "num_base_bdevs_operational": 4, 00:19:38.900 "base_bdevs_list": [ 00:19:38.900 { 00:19:38.900 "name": "BaseBdev1", 00:19:38.900 "uuid": "0874ebb6-07bb-4bd6-83cc-1258411fc9fd", 00:19:38.900 "is_configured": true, 00:19:38.900 "data_offset": 2048, 00:19:38.900 "data_size": 63488 00:19:38.900 }, 00:19:38.900 { 00:19:38.900 "name": "BaseBdev2", 00:19:38.900 "uuid": "e1a413aa-3642-49f8-a060-572c92c69a30", 00:19:38.900 "is_configured": true, 00:19:38.900 "data_offset": 2048, 00:19:38.900 "data_size": 63488 00:19:38.900 }, 00:19:38.900 { 00:19:38.900 "name": "BaseBdev3", 00:19:38.900 "uuid": "787922bc-33be-4197-bc8a-95a0d442ee55", 00:19:38.900 "is_configured": true, 00:19:38.900 "data_offset": 2048, 00:19:38.900 "data_size": 63488 00:19:38.900 }, 00:19:38.900 { 00:19:38.900 "name": "BaseBdev4", 00:19:38.900 "uuid": "4dab8ee2-00ab-48b3-8787-85d68a4f3d7e", 00:19:38.900 "is_configured": true, 00:19:38.900 "data_offset": 2048, 00:19:38.900 "data_size": 63488 00:19:38.900 } 00:19:38.900 ] 00:19:38.900 }' 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.900 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.469 [2024-11-26 20:32:32.727943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.469 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.469 "name": "Existed_Raid", 00:19:39.469 "aliases": [ 00:19:39.469 "2185cbdc-f01c-4e4a-8412-d6a6584679e5" 00:19:39.469 ], 00:19:39.469 "product_name": "Raid Volume", 00:19:39.469 "block_size": 512, 00:19:39.469 "num_blocks": 190464, 00:19:39.469 "uuid": "2185cbdc-f01c-4e4a-8412-d6a6584679e5", 00:19:39.469 "assigned_rate_limits": { 00:19:39.469 "rw_ios_per_sec": 0, 00:19:39.469 "rw_mbytes_per_sec": 0, 00:19:39.469 "r_mbytes_per_sec": 0, 00:19:39.469 "w_mbytes_per_sec": 0 00:19:39.469 }, 00:19:39.469 "claimed": false, 00:19:39.469 "zoned": false, 00:19:39.469 "supported_io_types": { 00:19:39.469 "read": true, 00:19:39.469 "write": true, 00:19:39.469 "unmap": false, 00:19:39.469 "flush": false, 
00:19:39.469 "reset": true, 00:19:39.469 "nvme_admin": false, 00:19:39.469 "nvme_io": false, 00:19:39.469 "nvme_io_md": false, 00:19:39.469 "write_zeroes": true, 00:19:39.469 "zcopy": false, 00:19:39.469 "get_zone_info": false, 00:19:39.469 "zone_management": false, 00:19:39.469 "zone_append": false, 00:19:39.469 "compare": false, 00:19:39.469 "compare_and_write": false, 00:19:39.469 "abort": false, 00:19:39.469 "seek_hole": false, 00:19:39.469 "seek_data": false, 00:19:39.469 "copy": false, 00:19:39.469 "nvme_iov_md": false 00:19:39.469 }, 00:19:39.470 "driver_specific": { 00:19:39.470 "raid": { 00:19:39.470 "uuid": "2185cbdc-f01c-4e4a-8412-d6a6584679e5", 00:19:39.470 "strip_size_kb": 64, 00:19:39.470 "state": "online", 00:19:39.470 "raid_level": "raid5f", 00:19:39.470 "superblock": true, 00:19:39.470 "num_base_bdevs": 4, 00:19:39.470 "num_base_bdevs_discovered": 4, 00:19:39.470 "num_base_bdevs_operational": 4, 00:19:39.470 "base_bdevs_list": [ 00:19:39.470 { 00:19:39.470 "name": "BaseBdev1", 00:19:39.470 "uuid": "0874ebb6-07bb-4bd6-83cc-1258411fc9fd", 00:19:39.470 "is_configured": true, 00:19:39.470 "data_offset": 2048, 00:19:39.470 "data_size": 63488 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "name": "BaseBdev2", 00:19:39.470 "uuid": "e1a413aa-3642-49f8-a060-572c92c69a30", 00:19:39.470 "is_configured": true, 00:19:39.470 "data_offset": 2048, 00:19:39.470 "data_size": 63488 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "name": "BaseBdev3", 00:19:39.470 "uuid": "787922bc-33be-4197-bc8a-95a0d442ee55", 00:19:39.470 "is_configured": true, 00:19:39.470 "data_offset": 2048, 00:19:39.470 "data_size": 63488 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "name": "BaseBdev4", 00:19:39.470 "uuid": "4dab8ee2-00ab-48b3-8787-85d68a4f3d7e", 00:19:39.470 "is_configured": true, 00:19:39.470 "data_offset": 2048, 00:19:39.470 "data_size": 63488 00:19:39.470 } 00:19:39.470 ] 00:19:39.470 } 00:19:39.470 } 00:19:39.470 }' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:39.470 BaseBdev2 00:19:39.470 BaseBdev3 00:19:39.470 BaseBdev4' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.470 20:32:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.470 20:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.470 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.470 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.470 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.729 20:32:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.729 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.730 [2024-11-26 20:32:33.075186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.730 "name": "Existed_Raid", 00:19:39.730 "uuid": "2185cbdc-f01c-4e4a-8412-d6a6584679e5", 00:19:39.730 "strip_size_kb": 64, 00:19:39.730 "state": "online", 00:19:39.730 "raid_level": "raid5f", 00:19:39.730 "superblock": true, 00:19:39.730 "num_base_bdevs": 4, 00:19:39.730 "num_base_bdevs_discovered": 3, 00:19:39.730 "num_base_bdevs_operational": 3, 00:19:39.730 "base_bdevs_list": [ 00:19:39.730 { 00:19:39.730 "name": 
null, 00:19:39.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.730 "is_configured": false, 00:19:39.730 "data_offset": 0, 00:19:39.730 "data_size": 63488 00:19:39.730 }, 00:19:39.730 { 00:19:39.730 "name": "BaseBdev2", 00:19:39.730 "uuid": "e1a413aa-3642-49f8-a060-572c92c69a30", 00:19:39.730 "is_configured": true, 00:19:39.730 "data_offset": 2048, 00:19:39.730 "data_size": 63488 00:19:39.730 }, 00:19:39.730 { 00:19:39.730 "name": "BaseBdev3", 00:19:39.730 "uuid": "787922bc-33be-4197-bc8a-95a0d442ee55", 00:19:39.730 "is_configured": true, 00:19:39.730 "data_offset": 2048, 00:19:39.730 "data_size": 63488 00:19:39.730 }, 00:19:39.730 { 00:19:39.730 "name": "BaseBdev4", 00:19:39.730 "uuid": "4dab8ee2-00ab-48b3-8787-85d68a4f3d7e", 00:19:39.730 "is_configured": true, 00:19:39.730 "data_offset": 2048, 00:19:39.730 "data_size": 63488 00:19:39.730 } 00:19:39.730 ] 00:19:39.730 }' 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.730 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.297 [2024-11-26 20:32:33.700343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:40.297 [2024-11-26 20:32:33.700593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.297 [2024-11-26 20:32:33.813674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.297 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.557 [2024-11-26 20:32:33.877600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.557 20:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.557 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.557 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:40.557 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:40.557 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:40.557 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.557 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.557 [2024-11-26 
20:32:34.049377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:40.557 [2024-11-26 20:32:34.049493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.817 20:32:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.817 BaseBdev2 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.817 [ 00:19:40.817 { 00:19:40.817 "name": "BaseBdev2", 00:19:40.817 "aliases": [ 00:19:40.817 "2b871d11-8bb9-4ae6-a804-3d166277e28a" 00:19:40.817 ], 00:19:40.817 "product_name": "Malloc disk", 00:19:40.817 "block_size": 512, 00:19:40.817 
"num_blocks": 65536, 00:19:40.817 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:40.817 "assigned_rate_limits": { 00:19:40.817 "rw_ios_per_sec": 0, 00:19:40.817 "rw_mbytes_per_sec": 0, 00:19:40.817 "r_mbytes_per_sec": 0, 00:19:40.817 "w_mbytes_per_sec": 0 00:19:40.817 }, 00:19:40.817 "claimed": false, 00:19:40.817 "zoned": false, 00:19:40.817 "supported_io_types": { 00:19:40.817 "read": true, 00:19:40.817 "write": true, 00:19:40.817 "unmap": true, 00:19:40.817 "flush": true, 00:19:40.817 "reset": true, 00:19:40.817 "nvme_admin": false, 00:19:40.817 "nvme_io": false, 00:19:40.817 "nvme_io_md": false, 00:19:40.817 "write_zeroes": true, 00:19:40.817 "zcopy": true, 00:19:40.817 "get_zone_info": false, 00:19:40.817 "zone_management": false, 00:19:40.817 "zone_append": false, 00:19:40.817 "compare": false, 00:19:40.817 "compare_and_write": false, 00:19:40.817 "abort": true, 00:19:40.817 "seek_hole": false, 00:19:40.817 "seek_data": false, 00:19:40.817 "copy": true, 00:19:40.817 "nvme_iov_md": false 00:19:40.817 }, 00:19:40.817 "memory_domains": [ 00:19:40.817 { 00:19:40.817 "dma_device_id": "system", 00:19:40.817 "dma_device_type": 1 00:19:40.817 }, 00:19:40.817 { 00:19:40.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.817 "dma_device_type": 2 00:19:40.817 } 00:19:40.817 ], 00:19:40.817 "driver_specific": {} 00:19:40.817 } 00:19:40.817 ] 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:40.817 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:40.817 20:32:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.818 BaseBdev3 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.818 [ 00:19:40.818 { 00:19:40.818 "name": "BaseBdev3", 00:19:40.818 "aliases": [ 00:19:40.818 
"afdbb55d-cd37-4709-a0cf-5fd00bacbfd0" 00:19:40.818 ], 00:19:40.818 "product_name": "Malloc disk", 00:19:40.818 "block_size": 512, 00:19:40.818 "num_blocks": 65536, 00:19:40.818 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:40.818 "assigned_rate_limits": { 00:19:40.818 "rw_ios_per_sec": 0, 00:19:40.818 "rw_mbytes_per_sec": 0, 00:19:40.818 "r_mbytes_per_sec": 0, 00:19:40.818 "w_mbytes_per_sec": 0 00:19:40.818 }, 00:19:40.818 "claimed": false, 00:19:40.818 "zoned": false, 00:19:40.818 "supported_io_types": { 00:19:40.818 "read": true, 00:19:40.818 "write": true, 00:19:40.818 "unmap": true, 00:19:40.818 "flush": true, 00:19:40.818 "reset": true, 00:19:40.818 "nvme_admin": false, 00:19:40.818 "nvme_io": false, 00:19:40.818 "nvme_io_md": false, 00:19:40.818 "write_zeroes": true, 00:19:40.818 "zcopy": true, 00:19:40.818 "get_zone_info": false, 00:19:40.818 "zone_management": false, 00:19:40.818 "zone_append": false, 00:19:40.818 "compare": false, 00:19:40.818 "compare_and_write": false, 00:19:40.818 "abort": true, 00:19:40.818 "seek_hole": false, 00:19:40.818 "seek_data": false, 00:19:40.818 "copy": true, 00:19:40.818 "nvme_iov_md": false 00:19:40.818 }, 00:19:40.818 "memory_domains": [ 00:19:40.818 { 00:19:40.818 "dma_device_id": "system", 00:19:40.818 "dma_device_type": 1 00:19:40.818 }, 00:19:40.818 { 00:19:40.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.818 "dma_device_type": 2 00:19:40.818 } 00:19:40.818 ], 00:19:40.818 "driver_specific": {} 00:19:40.818 } 00:19:40.818 ] 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:40.818 20:32:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.818 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.078 BaseBdev4 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.078 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:41.078 [ 00:19:41.078 { 00:19:41.078 "name": "BaseBdev4", 00:19:41.078 "aliases": [ 00:19:41.078 "39763e3d-f397-4a05-8b06-0395ca324a07" 00:19:41.078 ], 00:19:41.078 "product_name": "Malloc disk", 00:19:41.078 "block_size": 512, 00:19:41.078 "num_blocks": 65536, 00:19:41.078 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:41.078 "assigned_rate_limits": { 00:19:41.078 "rw_ios_per_sec": 0, 00:19:41.078 "rw_mbytes_per_sec": 0, 00:19:41.078 "r_mbytes_per_sec": 0, 00:19:41.078 "w_mbytes_per_sec": 0 00:19:41.078 }, 00:19:41.078 "claimed": false, 00:19:41.078 "zoned": false, 00:19:41.078 "supported_io_types": { 00:19:41.078 "read": true, 00:19:41.078 "write": true, 00:19:41.078 "unmap": true, 00:19:41.078 "flush": true, 00:19:41.078 "reset": true, 00:19:41.078 "nvme_admin": false, 00:19:41.078 "nvme_io": false, 00:19:41.078 "nvme_io_md": false, 00:19:41.078 "write_zeroes": true, 00:19:41.079 "zcopy": true, 00:19:41.079 "get_zone_info": false, 00:19:41.079 "zone_management": false, 00:19:41.079 "zone_append": false, 00:19:41.079 "compare": false, 00:19:41.079 "compare_and_write": false, 00:19:41.079 "abort": true, 00:19:41.079 "seek_hole": false, 00:19:41.079 "seek_data": false, 00:19:41.079 "copy": true, 00:19:41.079 "nvme_iov_md": false 00:19:41.079 }, 00:19:41.079 "memory_domains": [ 00:19:41.079 { 00:19:41.079 "dma_device_id": "system", 00:19:41.079 "dma_device_type": 1 00:19:41.079 }, 00:19:41.079 { 00:19:41.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.079 "dma_device_type": 2 00:19:41.079 } 00:19:41.079 ], 00:19:41.079 "driver_specific": {} 00:19:41.079 } 00:19:41.079 ] 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:41.079 20:32:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.079 [2024-11-26 20:32:34.410030] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.079 [2024-11-26 20:32:34.410123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.079 [2024-11-26 20:32:34.410180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.079 [2024-11-26 20:32:34.412074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:41.079 [2024-11-26 20:32:34.412202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.079 "name": "Existed_Raid", 00:19:41.079 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:41.079 "strip_size_kb": 64, 00:19:41.079 "state": "configuring", 00:19:41.079 "raid_level": "raid5f", 00:19:41.079 "superblock": true, 00:19:41.079 "num_base_bdevs": 4, 00:19:41.079 "num_base_bdevs_discovered": 3, 00:19:41.079 "num_base_bdevs_operational": 4, 00:19:41.079 "base_bdevs_list": [ 00:19:41.079 { 00:19:41.079 "name": "BaseBdev1", 00:19:41.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.079 "is_configured": false, 00:19:41.079 "data_offset": 0, 00:19:41.079 "data_size": 0 00:19:41.079 }, 00:19:41.079 { 00:19:41.079 "name": "BaseBdev2", 00:19:41.079 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:41.079 "is_configured": true, 00:19:41.079 "data_offset": 2048, 00:19:41.079 
"data_size": 63488 00:19:41.079 }, 00:19:41.079 { 00:19:41.079 "name": "BaseBdev3", 00:19:41.079 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:41.079 "is_configured": true, 00:19:41.079 "data_offset": 2048, 00:19:41.079 "data_size": 63488 00:19:41.079 }, 00:19:41.079 { 00:19:41.079 "name": "BaseBdev4", 00:19:41.079 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:41.079 "is_configured": true, 00:19:41.079 "data_offset": 2048, 00:19:41.079 "data_size": 63488 00:19:41.079 } 00:19:41.079 ] 00:19:41.079 }' 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.079 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.339 [2024-11-26 20:32:34.845268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.339 20:32:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.339 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.599 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.599 "name": "Existed_Raid", 00:19:41.599 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:41.599 "strip_size_kb": 64, 00:19:41.599 "state": "configuring", 00:19:41.599 "raid_level": "raid5f", 00:19:41.599 "superblock": true, 00:19:41.599 "num_base_bdevs": 4, 00:19:41.599 "num_base_bdevs_discovered": 2, 00:19:41.599 "num_base_bdevs_operational": 4, 00:19:41.599 "base_bdevs_list": [ 00:19:41.599 { 00:19:41.599 "name": "BaseBdev1", 00:19:41.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.599 "is_configured": false, 00:19:41.599 "data_offset": 0, 00:19:41.599 "data_size": 0 00:19:41.599 }, 00:19:41.599 { 00:19:41.599 "name": null, 00:19:41.599 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:41.599 
"is_configured": false, 00:19:41.599 "data_offset": 0, 00:19:41.599 "data_size": 63488 00:19:41.599 }, 00:19:41.599 { 00:19:41.599 "name": "BaseBdev3", 00:19:41.599 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:41.599 "is_configured": true, 00:19:41.599 "data_offset": 2048, 00:19:41.599 "data_size": 63488 00:19:41.599 }, 00:19:41.599 { 00:19:41.599 "name": "BaseBdev4", 00:19:41.599 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:41.599 "is_configured": true, 00:19:41.599 "data_offset": 2048, 00:19:41.599 "data_size": 63488 00:19:41.599 } 00:19:41.599 ] 00:19:41.599 }' 00:19:41.599 20:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.599 20:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.859 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.859 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.859 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.859 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:41.859 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.860 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:41.860 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:41.860 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.860 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.120 [2024-11-26 20:32:35.422217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:19:42.120 BaseBdev1 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.120 [ 00:19:42.120 { 00:19:42.120 "name": "BaseBdev1", 00:19:42.120 "aliases": [ 00:19:42.120 "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2" 00:19:42.120 ], 00:19:42.120 "product_name": "Malloc disk", 00:19:42.120 "block_size": 512, 00:19:42.120 "num_blocks": 65536, 00:19:42.120 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 
00:19:42.120 "assigned_rate_limits": { 00:19:42.120 "rw_ios_per_sec": 0, 00:19:42.120 "rw_mbytes_per_sec": 0, 00:19:42.120 "r_mbytes_per_sec": 0, 00:19:42.120 "w_mbytes_per_sec": 0 00:19:42.120 }, 00:19:42.120 "claimed": true, 00:19:42.120 "claim_type": "exclusive_write", 00:19:42.120 "zoned": false, 00:19:42.120 "supported_io_types": { 00:19:42.120 "read": true, 00:19:42.120 "write": true, 00:19:42.120 "unmap": true, 00:19:42.120 "flush": true, 00:19:42.120 "reset": true, 00:19:42.120 "nvme_admin": false, 00:19:42.120 "nvme_io": false, 00:19:42.120 "nvme_io_md": false, 00:19:42.120 "write_zeroes": true, 00:19:42.120 "zcopy": true, 00:19:42.120 "get_zone_info": false, 00:19:42.120 "zone_management": false, 00:19:42.120 "zone_append": false, 00:19:42.120 "compare": false, 00:19:42.120 "compare_and_write": false, 00:19:42.120 "abort": true, 00:19:42.120 "seek_hole": false, 00:19:42.120 "seek_data": false, 00:19:42.120 "copy": true, 00:19:42.120 "nvme_iov_md": false 00:19:42.120 }, 00:19:42.120 "memory_domains": [ 00:19:42.120 { 00:19:42.120 "dma_device_id": "system", 00:19:42.120 "dma_device_type": 1 00:19:42.120 }, 00:19:42.120 { 00:19:42.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.120 "dma_device_type": 2 00:19:42.120 } 00:19:42.120 ], 00:19:42.120 "driver_specific": {} 00:19:42.120 } 00:19:42.120 ] 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.120 20:32:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.120 "name": "Existed_Raid", 00:19:42.120 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:42.120 "strip_size_kb": 64, 00:19:42.120 "state": "configuring", 00:19:42.120 "raid_level": "raid5f", 00:19:42.120 "superblock": true, 00:19:42.120 "num_base_bdevs": 4, 00:19:42.120 "num_base_bdevs_discovered": 3, 00:19:42.120 "num_base_bdevs_operational": 4, 00:19:42.120 "base_bdevs_list": [ 00:19:42.120 { 00:19:42.120 "name": "BaseBdev1", 00:19:42.120 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 
00:19:42.120 "is_configured": true, 00:19:42.120 "data_offset": 2048, 00:19:42.120 "data_size": 63488 00:19:42.120 }, 00:19:42.120 { 00:19:42.120 "name": null, 00:19:42.120 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:42.120 "is_configured": false, 00:19:42.120 "data_offset": 0, 00:19:42.120 "data_size": 63488 00:19:42.120 }, 00:19:42.120 { 00:19:42.120 "name": "BaseBdev3", 00:19:42.120 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:42.120 "is_configured": true, 00:19:42.120 "data_offset": 2048, 00:19:42.120 "data_size": 63488 00:19:42.120 }, 00:19:42.120 { 00:19:42.120 "name": "BaseBdev4", 00:19:42.120 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:42.120 "is_configured": true, 00:19:42.120 "data_offset": 2048, 00:19:42.120 "data_size": 63488 00:19:42.120 } 00:19:42.120 ] 00:19:42.120 }' 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.120 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.689 [2024-11-26 20:32:35.993405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.689 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.690 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.690 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:42.690 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.690 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.690 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.690 20:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.690 "name": "Existed_Raid", 00:19:42.690 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:42.690 "strip_size_kb": 64, 00:19:42.690 "state": "configuring", 00:19:42.690 "raid_level": "raid5f", 00:19:42.690 "superblock": true, 00:19:42.690 "num_base_bdevs": 4, 00:19:42.690 "num_base_bdevs_discovered": 2, 00:19:42.690 "num_base_bdevs_operational": 4, 00:19:42.690 "base_bdevs_list": [ 00:19:42.690 { 00:19:42.690 "name": "BaseBdev1", 00:19:42.690 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 00:19:42.690 "is_configured": true, 00:19:42.690 "data_offset": 2048, 00:19:42.690 "data_size": 63488 00:19:42.690 }, 00:19:42.690 { 00:19:42.690 "name": null, 00:19:42.690 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:42.690 "is_configured": false, 00:19:42.690 "data_offset": 0, 00:19:42.690 "data_size": 63488 00:19:42.690 }, 00:19:42.690 { 00:19:42.690 "name": null, 00:19:42.690 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:42.690 "is_configured": false, 00:19:42.690 "data_offset": 0, 00:19:42.690 "data_size": 63488 00:19:42.690 }, 00:19:42.690 { 00:19:42.690 "name": "BaseBdev4", 00:19:42.690 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:42.690 "is_configured": true, 00:19:42.690 "data_offset": 2048, 00:19:42.690 "data_size": 63488 00:19:42.690 } 00:19:42.690 ] 00:19:42.690 }' 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.690 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.949 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.949 20:32:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:42.949 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.949 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.949 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.209 [2024-11-26 20:32:36.528544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.209 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.209 "name": "Existed_Raid", 00:19:43.209 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:43.209 "strip_size_kb": 64, 00:19:43.209 "state": "configuring", 00:19:43.209 "raid_level": "raid5f", 00:19:43.209 "superblock": true, 00:19:43.209 "num_base_bdevs": 4, 00:19:43.209 "num_base_bdevs_discovered": 3, 00:19:43.209 "num_base_bdevs_operational": 4, 00:19:43.210 "base_bdevs_list": [ 00:19:43.210 { 00:19:43.210 "name": "BaseBdev1", 00:19:43.210 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 00:19:43.210 "is_configured": true, 00:19:43.210 "data_offset": 2048, 00:19:43.210 "data_size": 63488 00:19:43.210 }, 00:19:43.210 { 00:19:43.210 "name": null, 00:19:43.210 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:43.210 "is_configured": false, 00:19:43.210 "data_offset": 0, 00:19:43.210 "data_size": 63488 00:19:43.210 }, 00:19:43.210 { 00:19:43.210 "name": "BaseBdev3", 00:19:43.210 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 
00:19:43.210 "is_configured": true, 00:19:43.210 "data_offset": 2048, 00:19:43.210 "data_size": 63488 00:19:43.210 }, 00:19:43.210 { 00:19:43.210 "name": "BaseBdev4", 00:19:43.210 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:43.210 "is_configured": true, 00:19:43.210 "data_offset": 2048, 00:19:43.210 "data_size": 63488 00:19:43.210 } 00:19:43.210 ] 00:19:43.210 }' 00:19:43.210 20:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.210 20:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.780 [2024-11-26 20:32:37.083675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.780 "name": "Existed_Raid", 00:19:43.780 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:43.780 "strip_size_kb": 64, 00:19:43.780 "state": "configuring", 00:19:43.780 "raid_level": "raid5f", 
00:19:43.780 "superblock": true, 00:19:43.780 "num_base_bdevs": 4, 00:19:43.780 "num_base_bdevs_discovered": 2, 00:19:43.780 "num_base_bdevs_operational": 4, 00:19:43.780 "base_bdevs_list": [ 00:19:43.780 { 00:19:43.780 "name": null, 00:19:43.780 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 00:19:43.780 "is_configured": false, 00:19:43.780 "data_offset": 0, 00:19:43.780 "data_size": 63488 00:19:43.780 }, 00:19:43.780 { 00:19:43.780 "name": null, 00:19:43.780 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:43.780 "is_configured": false, 00:19:43.780 "data_offset": 0, 00:19:43.780 "data_size": 63488 00:19:43.780 }, 00:19:43.780 { 00:19:43.780 "name": "BaseBdev3", 00:19:43.780 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:43.780 "is_configured": true, 00:19:43.780 "data_offset": 2048, 00:19:43.780 "data_size": 63488 00:19:43.780 }, 00:19:43.780 { 00:19:43.780 "name": "BaseBdev4", 00:19:43.780 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:43.780 "is_configured": true, 00:19:43.780 "data_offset": 2048, 00:19:43.780 "data_size": 63488 00:19:43.780 } 00:19:43.780 ] 00:19:43.780 }' 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.780 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.350 [2024-11-26 20:32:37.689563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.350 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.351 "name": "Existed_Raid", 00:19:44.351 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:44.351 "strip_size_kb": 64, 00:19:44.351 "state": "configuring", 00:19:44.351 "raid_level": "raid5f", 00:19:44.351 "superblock": true, 00:19:44.351 "num_base_bdevs": 4, 00:19:44.351 "num_base_bdevs_discovered": 3, 00:19:44.351 "num_base_bdevs_operational": 4, 00:19:44.351 "base_bdevs_list": [ 00:19:44.351 { 00:19:44.351 "name": null, 00:19:44.351 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 00:19:44.351 "is_configured": false, 00:19:44.351 "data_offset": 0, 00:19:44.351 "data_size": 63488 00:19:44.351 }, 00:19:44.351 { 00:19:44.351 "name": "BaseBdev2", 00:19:44.351 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:44.351 "is_configured": true, 00:19:44.351 "data_offset": 2048, 00:19:44.351 "data_size": 63488 00:19:44.351 }, 00:19:44.351 { 00:19:44.351 "name": "BaseBdev3", 00:19:44.351 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:44.351 "is_configured": true, 00:19:44.351 "data_offset": 2048, 00:19:44.351 "data_size": 63488 00:19:44.351 }, 00:19:44.351 { 00:19:44.351 "name": "BaseBdev4", 00:19:44.351 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:44.351 "is_configured": true, 00:19:44.351 "data_offset": 2048, 00:19:44.351 "data_size": 63488 00:19:44.351 } 00:19:44.351 ] 00:19:44.351 }' 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:19:44.351 20:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.712 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:44.712 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.712 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.712 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.987 [2024-11-26 20:32:38.308366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:44.987 [2024-11-26 20:32:38.308643] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:44.987 [2024-11-26 20:32:38.308675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:44.987 [2024-11-26 20:32:38.308989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:44.987 NewBaseBdev 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:44.987 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.988 [2024-11-26 20:32:38.317515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:44.988 [2024-11-26 20:32:38.317595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:44.988 [2024-11-26 20:32:38.317944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.988 [ 00:19:44.988 { 00:19:44.988 "name": "NewBaseBdev", 00:19:44.988 "aliases": [ 00:19:44.988 "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2" 00:19:44.988 ], 00:19:44.988 "product_name": "Malloc disk", 00:19:44.988 "block_size": 512, 00:19:44.988 "num_blocks": 65536, 00:19:44.988 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 00:19:44.988 "assigned_rate_limits": { 00:19:44.988 "rw_ios_per_sec": 0, 00:19:44.988 "rw_mbytes_per_sec": 0, 00:19:44.988 "r_mbytes_per_sec": 0, 00:19:44.988 "w_mbytes_per_sec": 0 00:19:44.988 }, 00:19:44.988 "claimed": true, 00:19:44.988 "claim_type": "exclusive_write", 00:19:44.988 "zoned": false, 00:19:44.988 "supported_io_types": { 00:19:44.988 "read": true, 00:19:44.988 "write": true, 00:19:44.988 "unmap": true, 00:19:44.988 "flush": true, 00:19:44.988 "reset": true, 00:19:44.988 "nvme_admin": false, 00:19:44.988 "nvme_io": false, 00:19:44.988 "nvme_io_md": false, 00:19:44.988 "write_zeroes": true, 00:19:44.988 "zcopy": true, 00:19:44.988 "get_zone_info": false, 00:19:44.988 "zone_management": false, 00:19:44.988 "zone_append": false, 00:19:44.988 "compare": false, 00:19:44.988 "compare_and_write": false, 00:19:44.988 "abort": true, 00:19:44.988 "seek_hole": false, 00:19:44.988 "seek_data": false, 00:19:44.988 "copy": true, 00:19:44.988 "nvme_iov_md": false 00:19:44.988 }, 00:19:44.988 "memory_domains": [ 00:19:44.988 { 00:19:44.988 "dma_device_id": "system", 00:19:44.988 "dma_device_type": 1 00:19:44.988 }, 00:19:44.988 { 00:19:44.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.988 "dma_device_type": 2 00:19:44.988 } 
00:19:44.988 ], 00:19:44.988 "driver_specific": {} 00:19:44.988 } 00:19:44.988 ] 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.988 
20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.988 "name": "Existed_Raid", 00:19:44.988 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:44.988 "strip_size_kb": 64, 00:19:44.988 "state": "online", 00:19:44.988 "raid_level": "raid5f", 00:19:44.988 "superblock": true, 00:19:44.988 "num_base_bdevs": 4, 00:19:44.988 "num_base_bdevs_discovered": 4, 00:19:44.988 "num_base_bdevs_operational": 4, 00:19:44.988 "base_bdevs_list": [ 00:19:44.988 { 00:19:44.988 "name": "NewBaseBdev", 00:19:44.988 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 00:19:44.988 "is_configured": true, 00:19:44.988 "data_offset": 2048, 00:19:44.988 "data_size": 63488 00:19:44.988 }, 00:19:44.988 { 00:19:44.988 "name": "BaseBdev2", 00:19:44.988 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:44.988 "is_configured": true, 00:19:44.988 "data_offset": 2048, 00:19:44.988 "data_size": 63488 00:19:44.988 }, 00:19:44.988 { 00:19:44.988 "name": "BaseBdev3", 00:19:44.988 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:44.988 "is_configured": true, 00:19:44.988 "data_offset": 2048, 00:19:44.988 "data_size": 63488 00:19:44.988 }, 00:19:44.988 { 00:19:44.988 "name": "BaseBdev4", 00:19:44.988 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:44.988 "is_configured": true, 00:19:44.988 "data_offset": 2048, 00:19:44.988 "data_size": 63488 00:19:44.988 } 00:19:44.988 ] 00:19:44.988 }' 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.988 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:45.558 [2024-11-26 20:32:38.843059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.558 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:45.559 "name": "Existed_Raid", 00:19:45.559 "aliases": [ 00:19:45.559 "7516ffc5-7007-429a-92d5-3958e5ed1b45" 00:19:45.559 ], 00:19:45.559 "product_name": "Raid Volume", 00:19:45.559 "block_size": 512, 00:19:45.559 "num_blocks": 190464, 00:19:45.559 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:45.559 "assigned_rate_limits": { 00:19:45.559 "rw_ios_per_sec": 0, 00:19:45.559 "rw_mbytes_per_sec": 0, 00:19:45.559 "r_mbytes_per_sec": 0, 00:19:45.559 "w_mbytes_per_sec": 0 00:19:45.559 }, 00:19:45.559 "claimed": false, 00:19:45.559 "zoned": false, 00:19:45.559 "supported_io_types": { 00:19:45.559 "read": true, 00:19:45.559 "write": true, 00:19:45.559 "unmap": false, 00:19:45.559 "flush": false, 
00:19:45.559 "reset": true, 00:19:45.559 "nvme_admin": false, 00:19:45.559 "nvme_io": false, 00:19:45.559 "nvme_io_md": false, 00:19:45.559 "write_zeroes": true, 00:19:45.559 "zcopy": false, 00:19:45.559 "get_zone_info": false, 00:19:45.559 "zone_management": false, 00:19:45.559 "zone_append": false, 00:19:45.559 "compare": false, 00:19:45.559 "compare_and_write": false, 00:19:45.559 "abort": false, 00:19:45.559 "seek_hole": false, 00:19:45.559 "seek_data": false, 00:19:45.559 "copy": false, 00:19:45.559 "nvme_iov_md": false 00:19:45.559 }, 00:19:45.559 "driver_specific": { 00:19:45.559 "raid": { 00:19:45.559 "uuid": "7516ffc5-7007-429a-92d5-3958e5ed1b45", 00:19:45.559 "strip_size_kb": 64, 00:19:45.559 "state": "online", 00:19:45.559 "raid_level": "raid5f", 00:19:45.559 "superblock": true, 00:19:45.559 "num_base_bdevs": 4, 00:19:45.559 "num_base_bdevs_discovered": 4, 00:19:45.559 "num_base_bdevs_operational": 4, 00:19:45.559 "base_bdevs_list": [ 00:19:45.559 { 00:19:45.559 "name": "NewBaseBdev", 00:19:45.559 "uuid": "5fd7d03e-092f-4a80-9e2b-7dcc0dcba0f2", 00:19:45.559 "is_configured": true, 00:19:45.559 "data_offset": 2048, 00:19:45.559 "data_size": 63488 00:19:45.559 }, 00:19:45.559 { 00:19:45.559 "name": "BaseBdev2", 00:19:45.559 "uuid": "2b871d11-8bb9-4ae6-a804-3d166277e28a", 00:19:45.559 "is_configured": true, 00:19:45.559 "data_offset": 2048, 00:19:45.559 "data_size": 63488 00:19:45.559 }, 00:19:45.559 { 00:19:45.559 "name": "BaseBdev3", 00:19:45.559 "uuid": "afdbb55d-cd37-4709-a0cf-5fd00bacbfd0", 00:19:45.559 "is_configured": true, 00:19:45.559 "data_offset": 2048, 00:19:45.559 "data_size": 63488 00:19:45.559 }, 00:19:45.559 { 00:19:45.559 "name": "BaseBdev4", 00:19:45.559 "uuid": "39763e3d-f397-4a05-8b06-0395ca324a07", 00:19:45.559 "is_configured": true, 00:19:45.559 "data_offset": 2048, 00:19:45.559 "data_size": 63488 00:19:45.559 } 00:19:45.559 ] 00:19:45.559 } 00:19:45.559 } 00:19:45.559 }' 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:45.559 BaseBdev2 00:19:45.559 BaseBdev3 00:19:45.559 BaseBdev4' 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.559 20:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:45.559 
20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.559 20:32:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.559 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.818 [2024-11-26 20:32:39.146312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:45.818 [2024-11-26 20:32:39.146390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.818 [2024-11-26 20:32:39.146502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.818 [2024-11-26 20:32:39.146825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.818 [2024-11-26 20:32:39.146880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83910 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83910 ']' 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83910 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83910 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83910' 00:19:45.818 killing process with pid 83910 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83910 00:19:45.818 [2024-11-26 20:32:39.189521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.818 20:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83910 00:19:46.078 [2024-11-26 20:32:39.631680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.457 20:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:47.457 00:19:47.457 real 0m12.305s 00:19:47.457 user 0m19.511s 00:19:47.457 sys 0m2.187s 00:19:47.457 20:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.457 ************************************ 00:19:47.457 END TEST raid5f_state_function_test_sb 00:19:47.457 ************************************ 00:19:47.457 20:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.457 20:32:40 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:47.457 20:32:40 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:47.457 20:32:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.457 20:32:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.457 ************************************ 00:19:47.457 START TEST raid5f_superblock_test 00:19:47.457 ************************************ 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84587 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84587 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84587 ']' 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.457 20:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.717 [2024-11-26 20:32:41.069110] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:19:47.717 [2024-11-26 20:32:41.069236] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84587 ] 00:19:47.717 [2024-11-26 20:32:41.243401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.975 [2024-11-26 20:32:41.380744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.234 [2024-11-26 20:32:41.603686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.234 [2024-11-26 20:32:41.603725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.494 malloc1 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.494 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.494 [2024-11-26 20:32:41.990060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:48.494 [2024-11-26 20:32:41.990180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.494 [2024-11-26 20:32:41.990208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:48.494 [2024-11-26 20:32:41.990219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.494 [2024-11-26 20:32:41.992512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.494 [2024-11-26 20:32:41.992557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:48.494 pt1 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.495 20:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.495 malloc2 00:19:48.495 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.495 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:48.495 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.495 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.495 [2024-11-26 20:32:42.044464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:48.495 [2024-11-26 20:32:42.044575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.495 [2024-11-26 20:32:42.044609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:48.495 [2024-11-26 20:32:42.044620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.495 [2024-11-26 20:32:42.047000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.495 [2024-11-26 20:32:42.047040] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:48.755 pt2 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.755 malloc3 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.755 [2024-11-26 20:32:42.110622] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:48.755 [2024-11-26 20:32:42.110715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.755 [2024-11-26 20:32:42.110757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:48.755 [2024-11-26 20:32:42.110787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.755 [2024-11-26 20:32:42.113034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.755 [2024-11-26 20:32:42.113110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:48.755 pt3 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.755 20:32:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.755 malloc4 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.755 [2024-11-26 20:32:42.166541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:48.755 [2024-11-26 20:32:42.166604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.755 [2024-11-26 20:32:42.166628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:48.755 [2024-11-26 20:32:42.166638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.755 [2024-11-26 20:32:42.168884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.755 [2024-11-26 20:32:42.168921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:48.755 pt4 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.755 [2024-11-26 20:32:42.178549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:48.755 [2024-11-26 20:32:42.180429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:48.755 [2024-11-26 20:32:42.180578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:48.755 [2024-11-26 20:32:42.180638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:48.755 [2024-11-26 20:32:42.180853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:48.755 [2024-11-26 20:32:42.180870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:48.755 [2024-11-26 20:32:42.181141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:48.755 [2024-11-26 20:32:42.189594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:48.755 [2024-11-26 20:32:42.189658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:48.755 [2024-11-26 20:32:42.189893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.755 
20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.755 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.756 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.756 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.756 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.756 "name": "raid_bdev1", 00:19:48.756 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:48.756 "strip_size_kb": 64, 00:19:48.756 "state": "online", 00:19:48.756 "raid_level": "raid5f", 00:19:48.756 "superblock": true, 00:19:48.756 "num_base_bdevs": 4, 00:19:48.756 "num_base_bdevs_discovered": 4, 00:19:48.756 "num_base_bdevs_operational": 4, 00:19:48.756 "base_bdevs_list": [ 00:19:48.756 { 00:19:48.756 "name": "pt1", 00:19:48.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.756 "is_configured": true, 00:19:48.756 "data_offset": 2048, 00:19:48.756 "data_size": 63488 00:19:48.756 }, 00:19:48.756 { 00:19:48.756 "name": "pt2", 00:19:48.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.756 "is_configured": true, 00:19:48.756 "data_offset": 2048, 00:19:48.756 
"data_size": 63488 00:19:48.756 }, 00:19:48.756 { 00:19:48.756 "name": "pt3", 00:19:48.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:48.756 "is_configured": true, 00:19:48.756 "data_offset": 2048, 00:19:48.756 "data_size": 63488 00:19:48.756 }, 00:19:48.756 { 00:19:48.756 "name": "pt4", 00:19:48.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:48.756 "is_configured": true, 00:19:48.756 "data_offset": 2048, 00:19:48.756 "data_size": 63488 00:19:48.756 } 00:19:48.756 ] 00:19:48.756 }' 00:19:48.756 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.756 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.323 [2024-11-26 20:32:42.674931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:49.323 "name": "raid_bdev1", 00:19:49.323 "aliases": [ 00:19:49.323 "7291c42b-cd68-4099-8518-017f9ff74563" 00:19:49.323 ], 00:19:49.323 "product_name": "Raid Volume", 00:19:49.323 "block_size": 512, 00:19:49.323 "num_blocks": 190464, 00:19:49.323 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:49.323 "assigned_rate_limits": { 00:19:49.323 "rw_ios_per_sec": 0, 00:19:49.323 "rw_mbytes_per_sec": 0, 00:19:49.323 "r_mbytes_per_sec": 0, 00:19:49.323 "w_mbytes_per_sec": 0 00:19:49.323 }, 00:19:49.323 "claimed": false, 00:19:49.323 "zoned": false, 00:19:49.323 "supported_io_types": { 00:19:49.323 "read": true, 00:19:49.323 "write": true, 00:19:49.323 "unmap": false, 00:19:49.323 "flush": false, 00:19:49.323 "reset": true, 00:19:49.323 "nvme_admin": false, 00:19:49.323 "nvme_io": false, 00:19:49.323 "nvme_io_md": false, 00:19:49.323 "write_zeroes": true, 00:19:49.323 "zcopy": false, 00:19:49.323 "get_zone_info": false, 00:19:49.323 "zone_management": false, 00:19:49.323 "zone_append": false, 00:19:49.323 "compare": false, 00:19:49.323 "compare_and_write": false, 00:19:49.323 "abort": false, 00:19:49.323 "seek_hole": false, 00:19:49.323 "seek_data": false, 00:19:49.323 "copy": false, 00:19:49.323 "nvme_iov_md": false 00:19:49.323 }, 00:19:49.323 "driver_specific": { 00:19:49.323 "raid": { 00:19:49.323 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:49.323 "strip_size_kb": 64, 00:19:49.323 "state": "online", 00:19:49.323 "raid_level": "raid5f", 00:19:49.323 "superblock": true, 00:19:49.323 "num_base_bdevs": 4, 00:19:49.323 "num_base_bdevs_discovered": 4, 00:19:49.323 "num_base_bdevs_operational": 4, 00:19:49.323 "base_bdevs_list": [ 00:19:49.323 { 00:19:49.323 "name": "pt1", 00:19:49.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:49.323 "is_configured": true, 00:19:49.323 "data_offset": 2048, 
00:19:49.323 "data_size": 63488 00:19:49.323 }, 00:19:49.323 { 00:19:49.323 "name": "pt2", 00:19:49.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.323 "is_configured": true, 00:19:49.323 "data_offset": 2048, 00:19:49.323 "data_size": 63488 00:19:49.323 }, 00:19:49.323 { 00:19:49.323 "name": "pt3", 00:19:49.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:49.323 "is_configured": true, 00:19:49.323 "data_offset": 2048, 00:19:49.323 "data_size": 63488 00:19:49.323 }, 00:19:49.323 { 00:19:49.323 "name": "pt4", 00:19:49.323 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:49.323 "is_configured": true, 00:19:49.323 "data_offset": 2048, 00:19:49.323 "data_size": 63488 00:19:49.323 } 00:19:49.323 ] 00:19:49.323 } 00:19:49.323 } 00:19:49.323 }' 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:49.323 pt2 00:19:49.323 pt3 00:19:49.323 pt4' 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.323 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.324 20:32:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.324 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.584 20:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:49.584 [2024-11-26 20:32:43.010478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7291c42b-cd68-4099-8518-017f9ff74563 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
7291c42b-cd68-4099-8518-017f9ff74563 ']' 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.584 [2024-11-26 20:32:43.058170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.584 [2024-11-26 20:32:43.058202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:49.584 [2024-11-26 20:32:43.058319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.584 [2024-11-26 20:32:43.058414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.584 [2024-11-26 20:32:43.058430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:49.584 
20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.584 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.844 20:32:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:49.844 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.844 [2024-11-26 20:32:43.221929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:49.844 [2024-11-26 20:32:43.224042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:49.844 [2024-11-26 20:32:43.224095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:49.844 [2024-11-26 20:32:43.224133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:49.845 [2024-11-26 20:32:43.224189] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:49.845 [2024-11-26 20:32:43.224259] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:49.845 [2024-11-26 20:32:43.224283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:49.845 [2024-11-26 20:32:43.224305] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:49.845 [2024-11-26 20:32:43.224320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.845 [2024-11-26 20:32:43.224333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:49.845 request: 00:19:49.845 { 00:19:49.845 "name": "raid_bdev1", 00:19:49.845 "raid_level": "raid5f", 00:19:49.845 "base_bdevs": [ 00:19:49.845 "malloc1", 00:19:49.845 "malloc2", 00:19:49.845 "malloc3", 00:19:49.845 "malloc4" 00:19:49.845 ], 00:19:49.845 "strip_size_kb": 64, 00:19:49.845 "superblock": false, 00:19:49.845 "method": "bdev_raid_create", 00:19:49.845 "req_id": 1 00:19:49.845 } 00:19:49.845 Got JSON-RPC error response 
00:19:49.845 response: 00:19:49.845 { 00:19:49.845 "code": -17, 00:19:49.845 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:49.845 } 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.845 [2024-11-26 20:32:43.285768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:49.845 [2024-11-26 20:32:43.285901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:49.845 [2024-11-26 20:32:43.285956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:49.845 [2024-11-26 20:32:43.286001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.845 [2024-11-26 20:32:43.288563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.845 [2024-11-26 20:32:43.288649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:49.845 [2024-11-26 20:32:43.288799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:49.845 [2024-11-26 20:32:43.288896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:49.845 pt1 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.845 "name": "raid_bdev1", 00:19:49.845 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:49.845 "strip_size_kb": 64, 00:19:49.845 "state": "configuring", 00:19:49.845 "raid_level": "raid5f", 00:19:49.845 "superblock": true, 00:19:49.845 "num_base_bdevs": 4, 00:19:49.845 "num_base_bdevs_discovered": 1, 00:19:49.845 "num_base_bdevs_operational": 4, 00:19:49.845 "base_bdevs_list": [ 00:19:49.845 { 00:19:49.845 "name": "pt1", 00:19:49.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:49.845 "is_configured": true, 00:19:49.845 "data_offset": 2048, 00:19:49.845 "data_size": 63488 00:19:49.845 }, 00:19:49.845 { 00:19:49.845 "name": null, 00:19:49.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.845 "is_configured": false, 00:19:49.845 "data_offset": 2048, 00:19:49.845 "data_size": 63488 00:19:49.845 }, 00:19:49.845 { 00:19:49.845 "name": null, 00:19:49.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:49.845 "is_configured": false, 00:19:49.845 "data_offset": 2048, 00:19:49.845 "data_size": 63488 00:19:49.845 }, 00:19:49.845 { 00:19:49.845 "name": null, 00:19:49.845 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:49.845 "is_configured": false, 00:19:49.845 "data_offset": 2048, 00:19:49.845 "data_size": 63488 00:19:49.845 } 00:19:49.845 ] 00:19:49.845 }' 
00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.845 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.412 [2024-11-26 20:32:43.725095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:50.412 [2024-11-26 20:32:43.725255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.412 [2024-11-26 20:32:43.725311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:50.412 [2024-11-26 20:32:43.725350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.412 [2024-11-26 20:32:43.725889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.412 [2024-11-26 20:32:43.725965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:50.412 [2024-11-26 20:32:43.726090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:50.412 [2024-11-26 20:32:43.726149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:50.412 pt2 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.412 [2024-11-26 20:32:43.737069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:50.412 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.413 "name": "raid_bdev1", 00:19:50.413 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:50.413 "strip_size_kb": 64, 00:19:50.413 "state": "configuring", 00:19:50.413 "raid_level": "raid5f", 00:19:50.413 "superblock": true, 00:19:50.413 "num_base_bdevs": 4, 00:19:50.413 "num_base_bdevs_discovered": 1, 00:19:50.413 "num_base_bdevs_operational": 4, 00:19:50.413 "base_bdevs_list": [ 00:19:50.413 { 00:19:50.413 "name": "pt1", 00:19:50.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:50.413 "is_configured": true, 00:19:50.413 "data_offset": 2048, 00:19:50.413 "data_size": 63488 00:19:50.413 }, 00:19:50.413 { 00:19:50.413 "name": null, 00:19:50.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.413 "is_configured": false, 00:19:50.413 "data_offset": 0, 00:19:50.413 "data_size": 63488 00:19:50.413 }, 00:19:50.413 { 00:19:50.413 "name": null, 00:19:50.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:50.413 "is_configured": false, 00:19:50.413 "data_offset": 2048, 00:19:50.413 "data_size": 63488 00:19:50.413 }, 00:19:50.413 { 00:19:50.413 "name": null, 00:19:50.413 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:50.413 "is_configured": false, 00:19:50.413 "data_offset": 2048, 00:19:50.413 "data_size": 63488 00:19:50.413 } 00:19:50.413 ] 00:19:50.413 }' 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.413 20:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.672 [2024-11-26 20:32:44.192311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:50.672 [2024-11-26 20:32:44.192439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.672 [2024-11-26 20:32:44.192484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:50.672 [2024-11-26 20:32:44.192516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.672 [2024-11-26 20:32:44.193039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.672 [2024-11-26 20:32:44.193101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:50.672 [2024-11-26 20:32:44.193225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:50.672 [2024-11-26 20:32:44.193318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:50.672 pt2 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.672 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.672 [2024-11-26 20:32:44.204254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:50.672 [2024-11-26 20:32:44.204353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.672 [2024-11-26 20:32:44.204400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:50.673 [2024-11-26 20:32:44.204444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.673 [2024-11-26 20:32:44.204892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.673 [2024-11-26 20:32:44.204962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:50.673 [2024-11-26 20:32:44.205078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:50.673 [2024-11-26 20:32:44.205139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:50.673 pt3 00:19:50.673 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.673 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:50.673 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:50.673 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:50.673 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.673 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.673 [2024-11-26 20:32:44.216193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:50.673 [2024-11-26 20:32:44.216285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.673 [2024-11-26 20:32:44.216320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:50.673 [2024-11-26 20:32:44.216365] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.673 [2024-11-26 20:32:44.216794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.673 [2024-11-26 20:32:44.216855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:50.673 [2024-11-26 20:32:44.216985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:50.673 [2024-11-26 20:32:44.217045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:50.673 [2024-11-26 20:32:44.217255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:50.673 [2024-11-26 20:32:44.217299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:50.673 [2024-11-26 20:32:44.217593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:50.673 [2024-11-26 20:32:44.225341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:50.673 [2024-11-26 20:32:44.225403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:50.673 [2024-11-26 20:32:44.225649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.673 pt4 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.932 "name": "raid_bdev1", 00:19:50.932 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:50.932 "strip_size_kb": 64, 00:19:50.932 "state": "online", 00:19:50.932 "raid_level": "raid5f", 00:19:50.932 "superblock": true, 00:19:50.932 "num_base_bdevs": 4, 00:19:50.932 "num_base_bdevs_discovered": 4, 00:19:50.932 "num_base_bdevs_operational": 4, 00:19:50.932 "base_bdevs_list": [ 00:19:50.932 { 00:19:50.932 "name": "pt1", 00:19:50.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:50.932 "is_configured": true, 00:19:50.932 
"data_offset": 2048, 00:19:50.932 "data_size": 63488 00:19:50.932 }, 00:19:50.932 { 00:19:50.932 "name": "pt2", 00:19:50.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.932 "is_configured": true, 00:19:50.932 "data_offset": 2048, 00:19:50.932 "data_size": 63488 00:19:50.932 }, 00:19:50.932 { 00:19:50.932 "name": "pt3", 00:19:50.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:50.932 "is_configured": true, 00:19:50.932 "data_offset": 2048, 00:19:50.932 "data_size": 63488 00:19:50.932 }, 00:19:50.932 { 00:19:50.932 "name": "pt4", 00:19:50.932 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:50.932 "is_configured": true, 00:19:50.932 "data_offset": 2048, 00:19:50.932 "data_size": 63488 00:19:50.932 } 00:19:50.932 ] 00:19:50.932 }' 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.932 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.192 20:32:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.192 [2024-11-26 20:32:44.686794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.192 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.192 "name": "raid_bdev1", 00:19:51.192 "aliases": [ 00:19:51.192 "7291c42b-cd68-4099-8518-017f9ff74563" 00:19:51.192 ], 00:19:51.192 "product_name": "Raid Volume", 00:19:51.192 "block_size": 512, 00:19:51.192 "num_blocks": 190464, 00:19:51.192 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:51.192 "assigned_rate_limits": { 00:19:51.193 "rw_ios_per_sec": 0, 00:19:51.193 "rw_mbytes_per_sec": 0, 00:19:51.193 "r_mbytes_per_sec": 0, 00:19:51.193 "w_mbytes_per_sec": 0 00:19:51.193 }, 00:19:51.193 "claimed": false, 00:19:51.193 "zoned": false, 00:19:51.193 "supported_io_types": { 00:19:51.193 "read": true, 00:19:51.193 "write": true, 00:19:51.193 "unmap": false, 00:19:51.193 "flush": false, 00:19:51.193 "reset": true, 00:19:51.193 "nvme_admin": false, 00:19:51.193 "nvme_io": false, 00:19:51.193 "nvme_io_md": false, 00:19:51.193 "write_zeroes": true, 00:19:51.193 "zcopy": false, 00:19:51.193 "get_zone_info": false, 00:19:51.193 "zone_management": false, 00:19:51.193 "zone_append": false, 00:19:51.193 "compare": false, 00:19:51.193 "compare_and_write": false, 00:19:51.193 "abort": false, 00:19:51.193 "seek_hole": false, 00:19:51.193 "seek_data": false, 00:19:51.193 "copy": false, 00:19:51.193 "nvme_iov_md": false 00:19:51.193 }, 00:19:51.193 "driver_specific": { 00:19:51.193 "raid": { 00:19:51.193 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:51.193 "strip_size_kb": 64, 00:19:51.193 "state": "online", 00:19:51.193 "raid_level": "raid5f", 00:19:51.193 "superblock": true, 00:19:51.193 "num_base_bdevs": 4, 00:19:51.193 "num_base_bdevs_discovered": 4, 
00:19:51.193 "num_base_bdevs_operational": 4, 00:19:51.193 "base_bdevs_list": [ 00:19:51.193 { 00:19:51.193 "name": "pt1", 00:19:51.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "pt2", 00:19:51.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "pt3", 00:19:51.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "pt4", 00:19:51.193 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 } 00:19:51.193 ] 00:19:51.193 } 00:19:51.193 } 00:19:51.193 }' 00:19:51.193 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:51.452 pt2 00:19:51.452 pt3 00:19:51.452 pt4' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.452 20:32:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.452 20:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.711 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.711 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.711 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:51.711 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.711 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.711 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:51.712 [2024-11-26 20:32:45.022279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.712 
20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7291c42b-cd68-4099-8518-017f9ff74563 '!=' 7291c42b-cd68-4099-8518-017f9ff74563 ']' 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.712 [2024-11-26 20:32:45.066030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.712 "name": "raid_bdev1", 00:19:51.712 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:51.712 "strip_size_kb": 64, 00:19:51.712 "state": "online", 00:19:51.712 "raid_level": "raid5f", 00:19:51.712 "superblock": true, 00:19:51.712 "num_base_bdevs": 4, 00:19:51.712 "num_base_bdevs_discovered": 3, 00:19:51.712 "num_base_bdevs_operational": 3, 00:19:51.712 "base_bdevs_list": [ 00:19:51.712 { 00:19:51.712 "name": null, 00:19:51.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.712 "is_configured": false, 00:19:51.712 "data_offset": 0, 00:19:51.712 "data_size": 63488 00:19:51.712 }, 00:19:51.712 { 00:19:51.712 "name": "pt2", 00:19:51.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:51.712 "is_configured": true, 00:19:51.712 "data_offset": 2048, 00:19:51.712 "data_size": 63488 00:19:51.712 }, 00:19:51.712 { 00:19:51.712 "name": "pt3", 00:19:51.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:51.712 "is_configured": true, 00:19:51.712 "data_offset": 2048, 00:19:51.712 "data_size": 63488 00:19:51.712 }, 00:19:51.712 { 00:19:51.712 "name": "pt4", 00:19:51.712 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:51.712 "is_configured": true, 00:19:51.712 
"data_offset": 2048, 00:19:51.712 "data_size": 63488 00:19:51.712 } 00:19:51.712 ] 00:19:51.712 }' 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.712 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 [2024-11-26 20:32:45.537209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:52.281 [2024-11-26 20:32:45.537260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.281 [2024-11-26 20:32:45.537354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.281 [2024-11-26 20:32:45.537439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.281 [2024-11-26 20:32:45.537457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 [2024-11-26 20:32:45.613101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:52.281 [2024-11-26 20:32:45.613162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.281 [2024-11-26 20:32:45.613184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:52.281 [2024-11-26 20:32:45.613194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.281 [2024-11-26 20:32:45.615573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.281 [2024-11-26 20:32:45.615609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:52.281 [2024-11-26 20:32:45.615694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:52.281 [2024-11-26 20:32:45.615742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:52.281 pt2 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.281 "name": "raid_bdev1", 00:19:52.281 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:52.281 "strip_size_kb": 64, 00:19:52.281 "state": "configuring", 00:19:52.281 "raid_level": "raid5f", 00:19:52.281 "superblock": true, 00:19:52.281 
"num_base_bdevs": 4, 00:19:52.281 "num_base_bdevs_discovered": 1, 00:19:52.281 "num_base_bdevs_operational": 3, 00:19:52.281 "base_bdevs_list": [ 00:19:52.281 { 00:19:52.281 "name": null, 00:19:52.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.281 "is_configured": false, 00:19:52.281 "data_offset": 2048, 00:19:52.281 "data_size": 63488 00:19:52.281 }, 00:19:52.281 { 00:19:52.281 "name": "pt2", 00:19:52.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:52.281 "is_configured": true, 00:19:52.281 "data_offset": 2048, 00:19:52.281 "data_size": 63488 00:19:52.281 }, 00:19:52.281 { 00:19:52.281 "name": null, 00:19:52.281 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:52.281 "is_configured": false, 00:19:52.281 "data_offset": 2048, 00:19:52.281 "data_size": 63488 00:19:52.281 }, 00:19:52.281 { 00:19:52.281 "name": null, 00:19:52.281 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:52.281 "is_configured": false, 00:19:52.281 "data_offset": 2048, 00:19:52.281 "data_size": 63488 00:19:52.281 } 00:19:52.281 ] 00:19:52.281 }' 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.281 20:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.850 [2024-11-26 20:32:46.104333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:52.850 [2024-11-26 
20:32:46.104424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.850 [2024-11-26 20:32:46.104453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:52.850 [2024-11-26 20:32:46.104468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.850 [2024-11-26 20:32:46.105015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.850 [2024-11-26 20:32:46.105043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:52.850 [2024-11-26 20:32:46.105143] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:52.850 [2024-11-26 20:32:46.105179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:52.850 pt3 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.850 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.850 "name": "raid_bdev1", 00:19:52.850 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:52.850 "strip_size_kb": 64, 00:19:52.850 "state": "configuring", 00:19:52.850 "raid_level": "raid5f", 00:19:52.850 "superblock": true, 00:19:52.850 "num_base_bdevs": 4, 00:19:52.850 "num_base_bdevs_discovered": 2, 00:19:52.850 "num_base_bdevs_operational": 3, 00:19:52.850 "base_bdevs_list": [ 00:19:52.850 { 00:19:52.850 "name": null, 00:19:52.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.850 "is_configured": false, 00:19:52.850 "data_offset": 2048, 00:19:52.850 "data_size": 63488 00:19:52.850 }, 00:19:52.850 { 00:19:52.850 "name": "pt2", 00:19:52.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:52.850 "is_configured": true, 00:19:52.850 "data_offset": 2048, 00:19:52.850 "data_size": 63488 00:19:52.850 }, 00:19:52.851 { 00:19:52.851 "name": "pt3", 00:19:52.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:52.851 "is_configured": true, 00:19:52.851 "data_offset": 2048, 00:19:52.851 "data_size": 63488 00:19:52.851 }, 00:19:52.851 { 00:19:52.851 "name": null, 00:19:52.851 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:52.851 "is_configured": false, 00:19:52.851 "data_offset": 2048, 
00:19:52.851 "data_size": 63488 00:19:52.851 } 00:19:52.851 ] 00:19:52.851 }' 00:19:52.851 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.851 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.111 [2024-11-26 20:32:46.563565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:53.111 [2024-11-26 20:32:46.563644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.111 [2024-11-26 20:32:46.563670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:53.111 [2024-11-26 20:32:46.563681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.111 [2024-11-26 20:32:46.564217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.111 [2024-11-26 20:32:46.564267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:53.111 [2024-11-26 20:32:46.564369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:53.111 [2024-11-26 20:32:46.564411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:53.111 [2024-11-26 20:32:46.564573] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:53.111 [2024-11-26 20:32:46.564591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:53.111 [2024-11-26 20:32:46.564859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:53.111 [2024-11-26 20:32:46.572847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:53.111 [2024-11-26 20:32:46.572876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:53.111 [2024-11-26 20:32:46.573271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.111 pt4 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.111 
20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.111 "name": "raid_bdev1", 00:19:53.111 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:53.111 "strip_size_kb": 64, 00:19:53.111 "state": "online", 00:19:53.111 "raid_level": "raid5f", 00:19:53.111 "superblock": true, 00:19:53.111 "num_base_bdevs": 4, 00:19:53.111 "num_base_bdevs_discovered": 3, 00:19:53.111 "num_base_bdevs_operational": 3, 00:19:53.111 "base_bdevs_list": [ 00:19:53.111 { 00:19:53.111 "name": null, 00:19:53.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.111 "is_configured": false, 00:19:53.111 "data_offset": 2048, 00:19:53.111 "data_size": 63488 00:19:53.111 }, 00:19:53.111 { 00:19:53.111 "name": "pt2", 00:19:53.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.111 "is_configured": true, 00:19:53.111 "data_offset": 2048, 00:19:53.111 "data_size": 63488 00:19:53.111 }, 00:19:53.111 { 00:19:53.111 "name": "pt3", 00:19:53.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:53.111 "is_configured": true, 00:19:53.111 "data_offset": 2048, 00:19:53.111 "data_size": 63488 00:19:53.111 }, 00:19:53.111 { 00:19:53.111 "name": "pt4", 00:19:53.111 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:53.111 "is_configured": true, 00:19:53.111 "data_offset": 2048, 00:19:53.111 "data_size": 63488 00:19:53.111 } 00:19:53.111 ] 00:19:53.111 }' 00:19:53.111 20:32:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.111 20:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 [2024-11-26 20:32:47.027914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.680 [2024-11-26 20:32:47.027951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.680 [2024-11-26 20:32:47.028052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.680 [2024-11-26 20:32:47.028145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.680 [2024-11-26 20:32:47.028169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 [2024-11-26 20:32:47.103778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:53.680 [2024-11-26 20:32:47.103868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.680 [2024-11-26 20:32:47.103899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:53.680 [2024-11-26 20:32:47.103915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.680 [2024-11-26 20:32:47.106600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.680 [2024-11-26 20:32:47.106645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:53.680 [2024-11-26 20:32:47.106746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:53.680 [2024-11-26 20:32:47.106807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:53.680 
[2024-11-26 20:32:47.106995] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:53.680 [2024-11-26 20:32:47.107019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.680 [2024-11-26 20:32:47.107051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:53.680 [2024-11-26 20:32:47.107136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:53.680 [2024-11-26 20:32:47.107289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:53.680 pt1 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.680 "name": "raid_bdev1", 00:19:53.680 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:53.680 "strip_size_kb": 64, 00:19:53.680 "state": "configuring", 00:19:53.680 "raid_level": "raid5f", 00:19:53.680 "superblock": true, 00:19:53.680 "num_base_bdevs": 4, 00:19:53.680 "num_base_bdevs_discovered": 2, 00:19:53.680 "num_base_bdevs_operational": 3, 00:19:53.680 "base_bdevs_list": [ 00:19:53.680 { 00:19:53.680 "name": null, 00:19:53.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.680 "is_configured": false, 00:19:53.680 "data_offset": 2048, 00:19:53.680 "data_size": 63488 00:19:53.680 }, 00:19:53.680 { 00:19:53.680 "name": "pt2", 00:19:53.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.680 "is_configured": true, 00:19:53.680 "data_offset": 2048, 00:19:53.680 "data_size": 63488 00:19:53.680 }, 00:19:53.680 { 00:19:53.680 "name": "pt3", 00:19:53.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:53.680 "is_configured": true, 00:19:53.680 "data_offset": 2048, 00:19:53.680 "data_size": 63488 00:19:53.680 }, 00:19:53.680 { 00:19:53.680 "name": null, 00:19:53.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:53.680 "is_configured": false, 00:19:53.680 "data_offset": 2048, 00:19:53.680 "data_size": 63488 00:19:53.680 } 00:19:53.680 ] 
00:19:53.680 }' 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.680 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.249 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.249 [2024-11-26 20:32:47.619010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:54.249 [2024-11-26 20:32:47.619088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.249 [2024-11-26 20:32:47.619119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:54.249 [2024-11-26 20:32:47.619130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.249 [2024-11-26 20:32:47.619680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.249 [2024-11-26 20:32:47.619715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:19:54.249 [2024-11-26 20:32:47.619822] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:54.249 [2024-11-26 20:32:47.619857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:54.249 [2024-11-26 20:32:47.620011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:54.249 [2024-11-26 20:32:47.620030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:54.250 [2024-11-26 20:32:47.620344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:54.250 [2024-11-26 20:32:47.630239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:54.250 [2024-11-26 20:32:47.630283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:54.250 pt4 00:19:54.250 [2024-11-26 20:32:47.630621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.250 20:32:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.250 "name": "raid_bdev1", 00:19:54.250 "uuid": "7291c42b-cd68-4099-8518-017f9ff74563", 00:19:54.250 "strip_size_kb": 64, 00:19:54.250 "state": "online", 00:19:54.250 "raid_level": "raid5f", 00:19:54.250 "superblock": true, 00:19:54.250 "num_base_bdevs": 4, 00:19:54.250 "num_base_bdevs_discovered": 3, 00:19:54.250 "num_base_bdevs_operational": 3, 00:19:54.250 "base_bdevs_list": [ 00:19:54.250 { 00:19:54.250 "name": null, 00:19:54.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.250 "is_configured": false, 00:19:54.250 "data_offset": 2048, 00:19:54.250 "data_size": 63488 00:19:54.250 }, 00:19:54.250 { 00:19:54.250 "name": "pt2", 00:19:54.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:54.250 "is_configured": true, 00:19:54.250 "data_offset": 2048, 00:19:54.250 "data_size": 63488 00:19:54.250 }, 00:19:54.250 { 00:19:54.250 "name": "pt3", 00:19:54.250 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:54.250 "is_configured": true, 00:19:54.250 "data_offset": 2048, 00:19:54.250 "data_size": 63488 
00:19:54.250 }, 00:19:54.250 { 00:19:54.250 "name": "pt4", 00:19:54.250 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:54.250 "is_configured": true, 00:19:54.250 "data_offset": 2048, 00:19:54.250 "data_size": 63488 00:19:54.250 } 00:19:54.250 ] 00:19:54.250 }' 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.250 20:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.828 [2024-11-26 20:32:48.157433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7291c42b-cd68-4099-8518-017f9ff74563 '!=' 7291c42b-cd68-4099-8518-017f9ff74563 ']' 00:19:54.828 20:32:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84587 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84587 ']' 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84587 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84587 00:19:54.828 killing process with pid 84587 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84587' 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84587 00:19:54.828 20:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84587 00:19:54.828 [2024-11-26 20:32:48.244395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.828 [2024-11-26 20:32:48.244504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.828 [2024-11-26 20:32:48.244606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.828 [2024-11-26 20:32:48.244630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:55.403 [2024-11-26 20:32:48.692331] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:56.783 20:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:56.783 
00:19:56.783 real 0m8.942s 00:19:56.783 user 0m14.018s 00:19:56.783 sys 0m1.614s 00:19:56.783 20:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.783 20:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.783 ************************************ 00:19:56.783 END TEST raid5f_superblock_test 00:19:56.783 ************************************ 00:19:56.783 20:32:49 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:56.783 20:32:49 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:56.783 20:32:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:56.783 20:32:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.783 20:32:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.783 ************************************ 00:19:56.784 START TEST raid5f_rebuild_test 00:19:56.784 ************************************ 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:56.784 20:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:56.784 20:32:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85074 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85074 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85074 ']' 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.784 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.784 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:56.784 Zero copy mechanism will not be used. 00:19:56.784 [2024-11-26 20:32:50.096001] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:19:56.784 [2024-11-26 20:32:50.096119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85074 ] 00:19:56.784 [2024-11-26 20:32:50.272300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.044 [2024-11-26 20:32:50.391487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.307 [2024-11-26 20:32:50.603162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.307 [2024-11-26 20:32:50.603213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.566 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.566 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:57.566 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:57.566 20:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:57.566 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.566 20:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.566 BaseBdev1_malloc 00:19:57.566 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.566 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:57.566 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.566 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.567 [2024-11-26 20:32:51.021198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:57.567 [2024-11-26 20:32:51.021276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.567 [2024-11-26 20:32:51.021310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:57.567 [2024-11-26 20:32:51.021333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.567 [2024-11-26 20:32:51.023643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.567 [2024-11-26 20:32:51.023691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:57.567 BaseBdev1 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.567 BaseBdev2_malloc 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.567 [2024-11-26 20:32:51.077745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:57.567 [2024-11-26 20:32:51.077819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.567 [2024-11-26 20:32:51.077858] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:57.567 [2024-11-26 20:32:51.077880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.567 [2024-11-26 20:32:51.080104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.567 [2024-11-26 20:32:51.080150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:57.567 BaseBdev2 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.567 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 BaseBdev3_malloc 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 [2024-11-26 20:32:51.151200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:57.828 [2024-11-26 20:32:51.151304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.828 [2024-11-26 20:32:51.151349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:57.828 [2024-11-26 20:32:51.151373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.828 
[2024-11-26 20:32:51.153798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.828 [2024-11-26 20:32:51.153854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:57.828 BaseBdev3 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 BaseBdev4_malloc 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 [2024-11-26 20:32:51.209550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:57.828 [2024-11-26 20:32:51.209633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.828 [2024-11-26 20:32:51.209676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:57.828 [2024-11-26 20:32:51.209696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.828 [2024-11-26 20:32:51.211969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.828 [2024-11-26 20:32:51.212018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:19:57.828 BaseBdev4 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 spare_malloc 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 spare_delay 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 [2024-11-26 20:32:51.280090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:57.828 [2024-11-26 20:32:51.280159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.828 [2024-11-26 20:32:51.280189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:57.828 [2024-11-26 20:32:51.280211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.828 [2024-11-26 20:32:51.282645] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.828 [2024-11-26 20:32:51.282695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:57.828 spare 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 [2024-11-26 20:32:51.292128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.828 [2024-11-26 20:32:51.294190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.828 [2024-11-26 20:32:51.294317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.828 [2024-11-26 20:32:51.294409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:57.828 [2024-11-26 20:32:51.294561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:57.828 [2024-11-26 20:32:51.294587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:57.828 [2024-11-26 20:32:51.294944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:57.828 [2024-11-26 20:32:51.303190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:57.828 [2024-11-26 20:32:51.303216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:57.828 [2024-11-26 20:32:51.303453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.828 20:32:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.828 "name": "raid_bdev1", 00:19:57.828 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:19:57.828 "strip_size_kb": 64, 00:19:57.828 "state": "online", 00:19:57.828 
"raid_level": "raid5f", 00:19:57.828 "superblock": false, 00:19:57.828 "num_base_bdevs": 4, 00:19:57.828 "num_base_bdevs_discovered": 4, 00:19:57.828 "num_base_bdevs_operational": 4, 00:19:57.828 "base_bdevs_list": [ 00:19:57.828 { 00:19:57.828 "name": "BaseBdev1", 00:19:57.828 "uuid": "6edcd92a-04d8-573c-9fed-98d6fc984303", 00:19:57.828 "is_configured": true, 00:19:57.828 "data_offset": 0, 00:19:57.828 "data_size": 65536 00:19:57.828 }, 00:19:57.828 { 00:19:57.828 "name": "BaseBdev2", 00:19:57.828 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:19:57.828 "is_configured": true, 00:19:57.828 "data_offset": 0, 00:19:57.828 "data_size": 65536 00:19:57.828 }, 00:19:57.828 { 00:19:57.828 "name": "BaseBdev3", 00:19:57.828 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:19:57.828 "is_configured": true, 00:19:57.828 "data_offset": 0, 00:19:57.828 "data_size": 65536 00:19:57.828 }, 00:19:57.828 { 00:19:57.828 "name": "BaseBdev4", 00:19:57.828 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:19:57.828 "is_configured": true, 00:19:57.828 "data_offset": 0, 00:19:57.828 "data_size": 65536 00:19:57.828 } 00:19:57.828 ] 00:19:57.828 }' 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.828 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.397 [2024-11-26 20:32:51.744221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:19:58.397 20:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:58.657 [2024-11-26 20:32:52.019564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:58.657 /dev/nbd0 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.657 1+0 records in 00:19:58.657 1+0 records out 00:19:58.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458875 s, 8.9 MB/s 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:58.657 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:59.226 512+0 records in 00:19:59.226 512+0 records out 00:19:59.226 100663296 bytes (101 MB, 96 MiB) copied, 0.550742 s, 183 MB/s 00:19:59.226 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:59.226 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:59.226 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:59.226 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.226 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:59.226 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.226 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.485 
[2024-11-26 20:32:52.885601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.485 [2024-11-26 20:32:52.904494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.485 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.486 "name": "raid_bdev1", 00:19:59.486 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:19:59.486 "strip_size_kb": 64, 00:19:59.486 "state": "online", 00:19:59.486 "raid_level": "raid5f", 00:19:59.486 "superblock": false, 00:19:59.486 "num_base_bdevs": 4, 00:19:59.486 "num_base_bdevs_discovered": 3, 00:19:59.486 "num_base_bdevs_operational": 3, 00:19:59.486 "base_bdevs_list": [ 00:19:59.486 { 00:19:59.486 "name": null, 00:19:59.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.486 "is_configured": false, 00:19:59.486 "data_offset": 0, 00:19:59.486 "data_size": 65536 00:19:59.486 }, 00:19:59.486 { 00:19:59.486 "name": "BaseBdev2", 00:19:59.486 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:19:59.486 "is_configured": true, 00:19:59.486 "data_offset": 0, 00:19:59.486 "data_size": 65536 00:19:59.486 }, 00:19:59.486 { 00:19:59.486 "name": "BaseBdev3", 00:19:59.486 "uuid": 
"75648691-8543-5ee8-ab5f-9feedbd4762a", 00:19:59.486 "is_configured": true, 00:19:59.486 "data_offset": 0, 00:19:59.486 "data_size": 65536 00:19:59.486 }, 00:19:59.486 { 00:19:59.486 "name": "BaseBdev4", 00:19:59.486 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:19:59.486 "is_configured": true, 00:19:59.486 "data_offset": 0, 00:19:59.486 "data_size": 65536 00:19:59.486 } 00:19:59.486 ] 00:19:59.486 }' 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.486 20:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.055 20:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:00.055 20:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.055 20:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.055 [2024-11-26 20:32:53.359747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.055 [2024-11-26 20:32:53.379083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:00.055 20:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.055 20:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:00.055 [2024-11-26 20:32:53.390704] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.992 20:32:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.992 "name": "raid_bdev1", 00:20:00.992 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:00.992 "strip_size_kb": 64, 00:20:00.992 "state": "online", 00:20:00.992 "raid_level": "raid5f", 00:20:00.992 "superblock": false, 00:20:00.992 "num_base_bdevs": 4, 00:20:00.992 "num_base_bdevs_discovered": 4, 00:20:00.992 "num_base_bdevs_operational": 4, 00:20:00.992 "process": { 00:20:00.992 "type": "rebuild", 00:20:00.992 "target": "spare", 00:20:00.992 "progress": { 00:20:00.992 "blocks": 17280, 00:20:00.992 "percent": 8 00:20:00.992 } 00:20:00.992 }, 00:20:00.992 "base_bdevs_list": [ 00:20:00.992 { 00:20:00.992 "name": "spare", 00:20:00.992 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:00.992 "is_configured": true, 00:20:00.992 "data_offset": 0, 00:20:00.992 "data_size": 65536 00:20:00.992 }, 00:20:00.992 { 00:20:00.992 "name": "BaseBdev2", 00:20:00.992 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:00.992 "is_configured": true, 00:20:00.992 "data_offset": 0, 00:20:00.992 "data_size": 65536 00:20:00.992 }, 00:20:00.992 { 00:20:00.992 "name": "BaseBdev3", 00:20:00.992 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:00.992 "is_configured": true, 00:20:00.992 "data_offset": 0, 00:20:00.992 "data_size": 65536 00:20:00.992 }, 
00:20:00.992 { 00:20:00.992 "name": "BaseBdev4", 00:20:00.992 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:00.992 "is_configured": true, 00:20:00.992 "data_offset": 0, 00:20:00.992 "data_size": 65536 00:20:00.992 } 00:20:00.992 ] 00:20:00.992 }' 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.992 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.251 [2024-11-26 20:32:54.554191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.251 [2024-11-26 20:32:54.600418] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:01.251 [2024-11-26 20:32:54.600529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.251 [2024-11-26 20:32:54.600552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.251 [2024-11-26 20:32:54.600569] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.251 "name": "raid_bdev1", 00:20:01.251 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:01.251 "strip_size_kb": 64, 00:20:01.251 "state": "online", 00:20:01.251 "raid_level": "raid5f", 00:20:01.251 "superblock": false, 00:20:01.251 "num_base_bdevs": 4, 00:20:01.251 "num_base_bdevs_discovered": 3, 00:20:01.251 "num_base_bdevs_operational": 3, 00:20:01.251 "base_bdevs_list": [ 00:20:01.251 { 00:20:01.251 "name": null, 00:20:01.251 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:01.251 "is_configured": false, 00:20:01.251 "data_offset": 0, 00:20:01.251 "data_size": 65536 00:20:01.251 }, 00:20:01.251 { 00:20:01.251 "name": "BaseBdev2", 00:20:01.251 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:01.251 "is_configured": true, 00:20:01.251 "data_offset": 0, 00:20:01.251 "data_size": 65536 00:20:01.251 }, 00:20:01.251 { 00:20:01.251 "name": "BaseBdev3", 00:20:01.251 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:01.251 "is_configured": true, 00:20:01.251 "data_offset": 0, 00:20:01.251 "data_size": 65536 00:20:01.251 }, 00:20:01.251 { 00:20:01.251 "name": "BaseBdev4", 00:20:01.251 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:01.251 "is_configured": true, 00:20:01.251 "data_offset": 0, 00:20:01.251 "data_size": 65536 00:20:01.251 } 00:20:01.251 ] 00:20:01.251 }' 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.251 20:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.820 20:32:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.820 "name": "raid_bdev1", 00:20:01.820 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:01.820 "strip_size_kb": 64, 00:20:01.820 "state": "online", 00:20:01.820 "raid_level": "raid5f", 00:20:01.820 "superblock": false, 00:20:01.820 "num_base_bdevs": 4, 00:20:01.820 "num_base_bdevs_discovered": 3, 00:20:01.820 "num_base_bdevs_operational": 3, 00:20:01.820 "base_bdevs_list": [ 00:20:01.820 { 00:20:01.820 "name": null, 00:20:01.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.820 "is_configured": false, 00:20:01.820 "data_offset": 0, 00:20:01.820 "data_size": 65536 00:20:01.820 }, 00:20:01.820 { 00:20:01.820 "name": "BaseBdev2", 00:20:01.820 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:01.820 "is_configured": true, 00:20:01.820 "data_offset": 0, 00:20:01.820 "data_size": 65536 00:20:01.820 }, 00:20:01.820 { 00:20:01.820 "name": "BaseBdev3", 00:20:01.820 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:01.820 "is_configured": true, 00:20:01.820 "data_offset": 0, 00:20:01.820 "data_size": 65536 00:20:01.820 }, 00:20:01.820 { 00:20:01.820 "name": "BaseBdev4", 00:20:01.820 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:01.820 "is_configured": true, 00:20:01.820 "data_offset": 0, 00:20:01.820 "data_size": 65536 00:20:01.820 } 00:20:01.820 ] 00:20:01.820 }' 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.820 [2024-11-26 20:32:55.264666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.820 [2024-11-26 20:32:55.282796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.820 20:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:01.820 [2024-11-26 20:32:55.295203] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.764 20:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 20:32:56 
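The status checks traced above all follow one pattern: fetch every raid bdev over RPC, select the one under test with jq, then pull `process.type` and `process.target` out of it (defaulting to `"none"` when no background process is running). A minimal standalone sketch of that pattern follows; `rpc_cmd` is stubbed here with canned JSON so the sketch runs without a live SPDK target (in the real suite it wraps `scripts/rpc.py`), and the stub's exact JSON shape is an assumption based on the output logged above.

```shell
rpc_cmd() {
    # Hypothetical stub standing in for scripts/rpc.py: emits canned
    # bdev_raid_get_bdevs output modeled on the JSON seen in this log.
    cat <<'JSON'
[{"name": "raid_bdev1", "state": "online",
  "process": {"type": "rebuild", "target": "spare"}}]
JSON
}

# Select the bdev under test, then read the process fields with a
# "none" fallback, exactly as bdev_raid.sh@174-177 does above.
raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")')
process_type=$(printf '%s' "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(printf '%s' "$raid_bdev_info" | jq -r '.process.target // "none"')

[ "$process_type" = rebuild ] && [ "$process_target" = spare ] \
    && echo "rebuild running on spare"
```

The `// "none"` alternative operator is what lets the same check verify both "a rebuild is running" and "no process is running" states without erroring on a missing `process` object.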
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.023 "name": "raid_bdev1", 00:20:03.023 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:03.023 "strip_size_kb": 64, 00:20:03.023 "state": "online", 00:20:03.023 "raid_level": "raid5f", 00:20:03.023 "superblock": false, 00:20:03.023 "num_base_bdevs": 4, 00:20:03.023 "num_base_bdevs_discovered": 4, 00:20:03.023 "num_base_bdevs_operational": 4, 00:20:03.023 "process": { 00:20:03.023 "type": "rebuild", 00:20:03.023 "target": "spare", 00:20:03.023 "progress": { 00:20:03.023 "blocks": 17280, 00:20:03.023 "percent": 8 00:20:03.023 } 00:20:03.023 }, 00:20:03.023 "base_bdevs_list": [ 00:20:03.023 { 00:20:03.023 "name": "spare", 00:20:03.023 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:03.023 "is_configured": true, 00:20:03.023 "data_offset": 0, 00:20:03.023 "data_size": 65536 00:20:03.023 }, 00:20:03.023 { 00:20:03.023 "name": "BaseBdev2", 00:20:03.023 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:03.023 "is_configured": true, 00:20:03.023 "data_offset": 0, 00:20:03.023 "data_size": 65536 00:20:03.023 }, 00:20:03.023 { 00:20:03.023 "name": "BaseBdev3", 00:20:03.023 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:03.023 "is_configured": true, 00:20:03.023 "data_offset": 0, 00:20:03.023 "data_size": 65536 00:20:03.023 }, 00:20:03.023 { 00:20:03.023 "name": "BaseBdev4", 00:20:03.023 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:03.023 "is_configured": true, 00:20:03.023 "data_offset": 0, 00:20:03.023 "data_size": 65536 00:20:03.023 } 00:20:03.023 ] 00:20:03.023 }' 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=649 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.023 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.023 "name": "raid_bdev1", 00:20:03.023 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 
00:20:03.023 "strip_size_kb": 64, 00:20:03.023 "state": "online", 00:20:03.023 "raid_level": "raid5f", 00:20:03.023 "superblock": false, 00:20:03.023 "num_base_bdevs": 4, 00:20:03.023 "num_base_bdevs_discovered": 4, 00:20:03.023 "num_base_bdevs_operational": 4, 00:20:03.023 "process": { 00:20:03.023 "type": "rebuild", 00:20:03.023 "target": "spare", 00:20:03.023 "progress": { 00:20:03.023 "blocks": 21120, 00:20:03.023 "percent": 10 00:20:03.023 } 00:20:03.024 }, 00:20:03.024 "base_bdevs_list": [ 00:20:03.024 { 00:20:03.024 "name": "spare", 00:20:03.024 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:03.024 "is_configured": true, 00:20:03.024 "data_offset": 0, 00:20:03.024 "data_size": 65536 00:20:03.024 }, 00:20:03.024 { 00:20:03.024 "name": "BaseBdev2", 00:20:03.024 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:03.024 "is_configured": true, 00:20:03.024 "data_offset": 0, 00:20:03.024 "data_size": 65536 00:20:03.024 }, 00:20:03.024 { 00:20:03.024 "name": "BaseBdev3", 00:20:03.024 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:03.024 "is_configured": true, 00:20:03.024 "data_offset": 0, 00:20:03.024 "data_size": 65536 00:20:03.024 }, 00:20:03.024 { 00:20:03.024 "name": "BaseBdev4", 00:20:03.024 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:03.024 "is_configured": true, 00:20:03.024 "data_offset": 0, 00:20:03.024 "data_size": 65536 00:20:03.024 } 00:20:03.024 ] 00:20:03.024 }' 00:20:03.024 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.024 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:03.024 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.024 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:03.024 20:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:04.403 20:32:57 
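The cycle that just completed (bdev_raid.sh@706-711) is a bounded wait: it records a deadline, re-verifies the rebuild each pass, and sleeps one second between passes using bash's `SECONDS` counter against a fixed budget. A compressed sketch of that loop shape, with the RPC progress query replaced by a simulated `rebuild_done` predicate so it runs instantly (the predicate name and the simulated increment are illustrative, not from the suite):

```shell
blocks=0
rebuild_done() { (( blocks >= 196608 )); }  # stand-in for querying rpc_cmd

timeout=$(( SECONDS + 5 ))   # the log's run computes a ~649 s budget
while (( SECONDS < timeout )) && ! rebuild_done; do
    # The real loop verifies process.type/target here, then sleeps 1 s;
    # this sketch just advances the simulated rebuild instead.
    blocks=$(( blocks + 65536 ))
done

rebuild_done && echo "rebuild reached ${blocks} blocks"
```

Because the guard is `SECONDS < timeout` rather than a fixed iteration count, a slow rebuild still passes as long as it finishes inside the budget, while a hung rebuild fails deterministically once the deadline lapses.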
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.403 "name": "raid_bdev1", 00:20:04.403 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:04.403 "strip_size_kb": 64, 00:20:04.403 "state": "online", 00:20:04.403 "raid_level": "raid5f", 00:20:04.403 "superblock": false, 00:20:04.403 "num_base_bdevs": 4, 00:20:04.403 "num_base_bdevs_discovered": 4, 00:20:04.403 "num_base_bdevs_operational": 4, 00:20:04.403 "process": { 00:20:04.403 "type": "rebuild", 00:20:04.403 "target": "spare", 00:20:04.403 "progress": { 00:20:04.403 "blocks": 42240, 00:20:04.403 "percent": 21 00:20:04.403 } 00:20:04.403 }, 00:20:04.403 "base_bdevs_list": [ 00:20:04.403 { 00:20:04.403 "name": "spare", 00:20:04.403 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 
00:20:04.403 "is_configured": true, 00:20:04.403 "data_offset": 0, 00:20:04.403 "data_size": 65536 00:20:04.403 }, 00:20:04.403 { 00:20:04.403 "name": "BaseBdev2", 00:20:04.403 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:04.403 "is_configured": true, 00:20:04.403 "data_offset": 0, 00:20:04.403 "data_size": 65536 00:20:04.403 }, 00:20:04.403 { 00:20:04.403 "name": "BaseBdev3", 00:20:04.403 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:04.403 "is_configured": true, 00:20:04.403 "data_offset": 0, 00:20:04.403 "data_size": 65536 00:20:04.403 }, 00:20:04.403 { 00:20:04.403 "name": "BaseBdev4", 00:20:04.403 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:04.403 "is_configured": true, 00:20:04.403 "data_offset": 0, 00:20:04.403 "data_size": 65536 00:20:04.403 } 00:20:04.403 ] 00:20:04.403 }' 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.403 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.404 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.404 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.404 20:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.340 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:05.340 "name": "raid_bdev1", 00:20:05.340 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:05.340 "strip_size_kb": 64, 00:20:05.340 "state": "online", 00:20:05.340 "raid_level": "raid5f", 00:20:05.340 "superblock": false, 00:20:05.340 "num_base_bdevs": 4, 00:20:05.340 "num_base_bdevs_discovered": 4, 00:20:05.340 "num_base_bdevs_operational": 4, 00:20:05.340 "process": { 00:20:05.340 "type": "rebuild", 00:20:05.340 "target": "spare", 00:20:05.340 "progress": { 00:20:05.340 "blocks": 65280, 00:20:05.340 "percent": 33 00:20:05.340 } 00:20:05.340 }, 00:20:05.340 "base_bdevs_list": [ 00:20:05.340 { 00:20:05.340 "name": "spare", 00:20:05.340 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:05.340 "is_configured": true, 00:20:05.340 "data_offset": 0, 00:20:05.340 "data_size": 65536 00:20:05.340 }, 00:20:05.340 { 00:20:05.340 "name": "BaseBdev2", 00:20:05.340 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:05.341 "is_configured": true, 00:20:05.341 "data_offset": 0, 00:20:05.341 "data_size": 65536 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "name": "BaseBdev3", 00:20:05.341 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:05.341 "is_configured": true, 00:20:05.341 "data_offset": 0, 00:20:05.341 "data_size": 65536 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "name": 
"BaseBdev4", 00:20:05.341 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:05.341 "is_configured": true, 00:20:05.341 "data_offset": 0, 00:20:05.341 "data_size": 65536 00:20:05.341 } 00:20:05.341 ] 00:20:05.341 }' 00:20:05.341 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.341 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.341 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.341 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.341 20:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.720 20:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.721 20:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.721 20:32:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.721 "name": "raid_bdev1", 00:20:06.721 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:06.721 "strip_size_kb": 64, 00:20:06.721 "state": "online", 00:20:06.721 "raid_level": "raid5f", 00:20:06.721 "superblock": false, 00:20:06.721 "num_base_bdevs": 4, 00:20:06.721 "num_base_bdevs_discovered": 4, 00:20:06.721 "num_base_bdevs_operational": 4, 00:20:06.721 "process": { 00:20:06.721 "type": "rebuild", 00:20:06.721 "target": "spare", 00:20:06.721 "progress": { 00:20:06.721 "blocks": 86400, 00:20:06.721 "percent": 43 00:20:06.721 } 00:20:06.721 }, 00:20:06.721 "base_bdevs_list": [ 00:20:06.721 { 00:20:06.721 "name": "spare", 00:20:06.721 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:06.721 "is_configured": true, 00:20:06.721 "data_offset": 0, 00:20:06.721 "data_size": 65536 00:20:06.721 }, 00:20:06.721 { 00:20:06.721 "name": "BaseBdev2", 00:20:06.721 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:06.721 "is_configured": true, 00:20:06.721 "data_offset": 0, 00:20:06.721 "data_size": 65536 00:20:06.721 }, 00:20:06.721 { 00:20:06.721 "name": "BaseBdev3", 00:20:06.721 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:06.721 "is_configured": true, 00:20:06.721 "data_offset": 0, 00:20:06.721 "data_size": 65536 00:20:06.721 }, 00:20:06.721 { 00:20:06.721 "name": "BaseBdev4", 00:20:06.721 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:06.721 "is_configured": true, 00:20:06.721 "data_offset": 0, 00:20:06.721 "data_size": 65536 00:20:06.721 } 00:20:06.721 ] 00:20:06.721 }' 00:20:06.721 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.721 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.721 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.721 20:32:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.721 20:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.662 "name": "raid_bdev1", 00:20:07.662 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:07.662 "strip_size_kb": 64, 00:20:07.662 "state": "online", 00:20:07.662 "raid_level": "raid5f", 00:20:07.662 "superblock": false, 00:20:07.662 "num_base_bdevs": 4, 00:20:07.662 "num_base_bdevs_discovered": 4, 00:20:07.662 "num_base_bdevs_operational": 4, 00:20:07.662 "process": { 00:20:07.662 "type": "rebuild", 00:20:07.662 "target": "spare", 00:20:07.662 "progress": { 00:20:07.662 "blocks": 107520, 00:20:07.662 "percent": 54 00:20:07.662 } 
00:20:07.662 }, 00:20:07.662 "base_bdevs_list": [ 00:20:07.662 { 00:20:07.662 "name": "spare", 00:20:07.662 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:07.662 "is_configured": true, 00:20:07.662 "data_offset": 0, 00:20:07.662 "data_size": 65536 00:20:07.662 }, 00:20:07.662 { 00:20:07.662 "name": "BaseBdev2", 00:20:07.662 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:07.662 "is_configured": true, 00:20:07.662 "data_offset": 0, 00:20:07.662 "data_size": 65536 00:20:07.662 }, 00:20:07.662 { 00:20:07.662 "name": "BaseBdev3", 00:20:07.662 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:07.662 "is_configured": true, 00:20:07.662 "data_offset": 0, 00:20:07.662 "data_size": 65536 00:20:07.662 }, 00:20:07.662 { 00:20:07.662 "name": "BaseBdev4", 00:20:07.662 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:07.662 "is_configured": true, 00:20:07.662 "data_offset": 0, 00:20:07.662 "data_size": 65536 00:20:07.662 } 00:20:07.662 ] 00:20:07.662 }' 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.662 20:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.041 
20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.041 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.041 "name": "raid_bdev1", 00:20:09.041 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:09.041 "strip_size_kb": 64, 00:20:09.041 "state": "online", 00:20:09.041 "raid_level": "raid5f", 00:20:09.041 "superblock": false, 00:20:09.041 "num_base_bdevs": 4, 00:20:09.041 "num_base_bdevs_discovered": 4, 00:20:09.041 "num_base_bdevs_operational": 4, 00:20:09.041 "process": { 00:20:09.041 "type": "rebuild", 00:20:09.041 "target": "spare", 00:20:09.041 "progress": { 00:20:09.041 "blocks": 130560, 00:20:09.041 "percent": 66 00:20:09.041 } 00:20:09.041 }, 00:20:09.041 "base_bdevs_list": [ 00:20:09.041 { 00:20:09.041 "name": "spare", 00:20:09.041 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:09.041 "is_configured": true, 00:20:09.041 "data_offset": 0, 00:20:09.041 "data_size": 65536 00:20:09.041 }, 00:20:09.041 { 00:20:09.041 "name": "BaseBdev2", 00:20:09.041 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:09.041 "is_configured": true, 00:20:09.041 "data_offset": 0, 00:20:09.041 "data_size": 65536 00:20:09.041 }, 00:20:09.041 { 00:20:09.041 "name": "BaseBdev3", 00:20:09.041 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 
00:20:09.041 "is_configured": true, 00:20:09.041 "data_offset": 0, 00:20:09.041 "data_size": 65536 00:20:09.041 }, 00:20:09.041 { 00:20:09.041 "name": "BaseBdev4", 00:20:09.042 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:09.042 "is_configured": true, 00:20:09.042 "data_offset": 0, 00:20:09.042 "data_size": 65536 00:20:09.042 } 00:20:09.042 ] 00:20:09.042 }' 00:20:09.042 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.042 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.042 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.042 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.042 20:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.981 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.981 "name": "raid_bdev1", 00:20:09.981 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:09.981 "strip_size_kb": 64, 00:20:09.981 "state": "online", 00:20:09.981 "raid_level": "raid5f", 00:20:09.981 "superblock": false, 00:20:09.981 "num_base_bdevs": 4, 00:20:09.981 "num_base_bdevs_discovered": 4, 00:20:09.981 "num_base_bdevs_operational": 4, 00:20:09.981 "process": { 00:20:09.981 "type": "rebuild", 00:20:09.981 "target": "spare", 00:20:09.981 "progress": { 00:20:09.981 "blocks": 151680, 00:20:09.981 "percent": 77 00:20:09.981 } 00:20:09.981 }, 00:20:09.981 "base_bdevs_list": [ 00:20:09.981 { 00:20:09.981 "name": "spare", 00:20:09.981 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:09.981 "is_configured": true, 00:20:09.981 "data_offset": 0, 00:20:09.981 "data_size": 65536 00:20:09.981 }, 00:20:09.981 { 00:20:09.981 "name": "BaseBdev2", 00:20:09.981 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:09.981 "is_configured": true, 00:20:09.981 "data_offset": 0, 00:20:09.981 "data_size": 65536 00:20:09.981 }, 00:20:09.981 { 00:20:09.981 "name": "BaseBdev3", 00:20:09.981 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:09.981 "is_configured": true, 00:20:09.981 "data_offset": 0, 00:20:09.981 "data_size": 65536 00:20:09.981 }, 00:20:09.981 { 00:20:09.981 "name": "BaseBdev4", 00:20:09.981 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:09.981 "is_configured": true, 00:20:09.981 "data_offset": 0, 00:20:09.981 "data_size": 65536 00:20:09.981 } 00:20:09.981 ] 00:20:09.981 }' 00:20:09.982 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.982 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.982 20:33:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.982 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.982 20:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.956 20:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.216 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.216 "name": "raid_bdev1", 00:20:11.216 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:11.216 "strip_size_kb": 64, 00:20:11.216 "state": "online", 00:20:11.216 "raid_level": "raid5f", 00:20:11.216 "superblock": false, 00:20:11.216 "num_base_bdevs": 4, 00:20:11.216 "num_base_bdevs_discovered": 4, 00:20:11.216 "num_base_bdevs_operational": 4, 00:20:11.216 "process": { 00:20:11.216 
"type": "rebuild", 00:20:11.216 "target": "spare", 00:20:11.216 "progress": { 00:20:11.216 "blocks": 174720, 00:20:11.216 "percent": 88 00:20:11.216 } 00:20:11.216 }, 00:20:11.216 "base_bdevs_list": [ 00:20:11.216 { 00:20:11.216 "name": "spare", 00:20:11.216 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:11.216 "is_configured": true, 00:20:11.216 "data_offset": 0, 00:20:11.216 "data_size": 65536 00:20:11.216 }, 00:20:11.216 { 00:20:11.216 "name": "BaseBdev2", 00:20:11.216 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:11.216 "is_configured": true, 00:20:11.216 "data_offset": 0, 00:20:11.216 "data_size": 65536 00:20:11.216 }, 00:20:11.216 { 00:20:11.216 "name": "BaseBdev3", 00:20:11.216 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:11.216 "is_configured": true, 00:20:11.216 "data_offset": 0, 00:20:11.216 "data_size": 65536 00:20:11.216 }, 00:20:11.216 { 00:20:11.216 "name": "BaseBdev4", 00:20:11.216 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:11.216 "is_configured": true, 00:20:11.216 "data_offset": 0, 00:20:11.216 "data_size": 65536 00:20:11.216 } 00:20:11.216 ] 00:20:11.216 }' 00:20:11.216 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.216 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.216 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.216 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.216 20:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:12.156 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.156 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.156 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:12.156 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.156 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.156 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.156 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.157 "name": "raid_bdev1", 00:20:12.157 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:12.157 "strip_size_kb": 64, 00:20:12.157 "state": "online", 00:20:12.157 "raid_level": "raid5f", 00:20:12.157 "superblock": false, 00:20:12.157 "num_base_bdevs": 4, 00:20:12.157 "num_base_bdevs_discovered": 4, 00:20:12.157 "num_base_bdevs_operational": 4, 00:20:12.157 "process": { 00:20:12.157 "type": "rebuild", 00:20:12.157 "target": "spare", 00:20:12.157 "progress": { 00:20:12.157 "blocks": 195840, 00:20:12.157 "percent": 99 00:20:12.157 } 00:20:12.157 }, 00:20:12.157 "base_bdevs_list": [ 00:20:12.157 { 00:20:12.157 "name": "spare", 00:20:12.157 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:12.157 "is_configured": true, 00:20:12.157 "data_offset": 0, 00:20:12.157 "data_size": 65536 00:20:12.157 }, 00:20:12.157 { 00:20:12.157 "name": "BaseBdev2", 00:20:12.157 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:12.157 "is_configured": true, 00:20:12.157 "data_offset": 0, 00:20:12.157 
"data_size": 65536 00:20:12.157 }, 00:20:12.157 { 00:20:12.157 "name": "BaseBdev3", 00:20:12.157 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:12.157 "is_configured": true, 00:20:12.157 "data_offset": 0, 00:20:12.157 "data_size": 65536 00:20:12.157 }, 00:20:12.157 { 00:20:12.157 "name": "BaseBdev4", 00:20:12.157 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:12.157 "is_configured": true, 00:20:12.157 "data_offset": 0, 00:20:12.157 "data_size": 65536 00:20:12.157 } 00:20:12.157 ] 00:20:12.157 }' 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.157 [2024-11-26 20:33:05.670380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:12.157 [2024-11-26 20:33:05.670459] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:12.157 [2024-11-26 20:33:05.670509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.157 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.417 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.417 20:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.367 "name": "raid_bdev1", 00:20:13.367 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:13.367 "strip_size_kb": 64, 00:20:13.367 "state": "online", 00:20:13.367 "raid_level": "raid5f", 00:20:13.367 "superblock": false, 00:20:13.367 "num_base_bdevs": 4, 00:20:13.367 "num_base_bdevs_discovered": 4, 00:20:13.367 "num_base_bdevs_operational": 4, 00:20:13.367 "base_bdevs_list": [ 00:20:13.367 { 00:20:13.367 "name": "spare", 00:20:13.367 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:13.367 "is_configured": true, 00:20:13.367 "data_offset": 0, 00:20:13.367 "data_size": 65536 00:20:13.367 }, 00:20:13.367 { 00:20:13.367 "name": "BaseBdev2", 00:20:13.367 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:13.367 "is_configured": true, 00:20:13.367 "data_offset": 0, 00:20:13.367 "data_size": 65536 00:20:13.367 }, 00:20:13.367 { 00:20:13.367 "name": "BaseBdev3", 00:20:13.367 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:13.367 "is_configured": true, 00:20:13.367 "data_offset": 0, 00:20:13.367 "data_size": 65536 00:20:13.367 }, 00:20:13.367 { 00:20:13.367 "name": "BaseBdev4", 00:20:13.367 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:13.367 "is_configured": true, 00:20:13.367 "data_offset": 0, 
00:20:13.367 "data_size": 65536 00:20:13.367 } 00:20:13.367 ] 00:20:13.367 }' 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.367 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.368 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.368 20:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.368 20:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.627 20:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.627 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.627 "name": "raid_bdev1", 00:20:13.627 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:13.627 "strip_size_kb": 64, 00:20:13.627 "state": "online", 00:20:13.627 "raid_level": 
"raid5f", 00:20:13.627 "superblock": false, 00:20:13.627 "num_base_bdevs": 4, 00:20:13.627 "num_base_bdevs_discovered": 4, 00:20:13.627 "num_base_bdevs_operational": 4, 00:20:13.627 "base_bdevs_list": [ 00:20:13.627 { 00:20:13.627 "name": "spare", 00:20:13.627 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 }, 00:20:13.627 { 00:20:13.627 "name": "BaseBdev2", 00:20:13.627 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 }, 00:20:13.627 { 00:20:13.627 "name": "BaseBdev3", 00:20:13.627 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 }, 00:20:13.627 { 00:20:13.627 "name": "BaseBdev4", 00:20:13.627 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 } 00:20:13.627 ] 00:20:13.627 }' 00:20:13.627 20:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.627 "name": "raid_bdev1", 00:20:13.627 "uuid": "49eb36ae-77a2-4bfd-9f24-baeea8149754", 00:20:13.627 "strip_size_kb": 64, 00:20:13.627 "state": "online", 00:20:13.627 "raid_level": "raid5f", 00:20:13.627 "superblock": false, 00:20:13.627 "num_base_bdevs": 4, 00:20:13.627 "num_base_bdevs_discovered": 4, 00:20:13.627 "num_base_bdevs_operational": 4, 00:20:13.627 "base_bdevs_list": [ 00:20:13.627 { 00:20:13.627 "name": "spare", 00:20:13.627 "uuid": "da6464a3-b983-5aaa-b314-24f4ab57f761", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 }, 00:20:13.627 { 00:20:13.627 "name": "BaseBdev2", 
00:20:13.627 "uuid": "ffb024ac-4e5a-5819-9b81-0937f8c94d2e", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 }, 00:20:13.627 { 00:20:13.627 "name": "BaseBdev3", 00:20:13.627 "uuid": "75648691-8543-5ee8-ab5f-9feedbd4762a", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 }, 00:20:13.627 { 00:20:13.627 "name": "BaseBdev4", 00:20:13.627 "uuid": "2dce3deb-487d-56fa-853a-25331e392cb4", 00:20:13.627 "is_configured": true, 00:20:13.627 "data_offset": 0, 00:20:13.627 "data_size": 65536 00:20:13.627 } 00:20:13.627 ] 00:20:13.627 }' 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.627 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.195 [2024-11-26 20:33:07.548268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.195 [2024-11-26 20:33:07.548311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.195 [2024-11-26 20:33:07.548417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.195 [2024-11-26 20:33:07.548527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.195 [2024-11-26 20:33:07.548546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:14.195 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:14.455 /dev/nbd0 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.455 1+0 records in 00:20:14.455 1+0 records out 00:20:14.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322297 s, 12.7 MB/s 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:14.455 20:33:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:14.715 /dev/nbd1 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.715 1+0 records in 00:20:14.715 1+0 records out 00:20:14.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447102 s, 9.2 MB/s 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:14.715 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:14.974 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:14.974 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:14.974 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:14.974 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:14.974 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:14.974 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.974 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:15.234 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85074 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85074 ']' 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85074 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 85074 00:20:15.493 killing process with pid 85074 00:20:15.493 Received shutdown signal, test time was about 60.000000 seconds 00:20:15.493 00:20:15.493 Latency(us) 00:20:15.493 [2024-11-26T20:33:09.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.493 [2024-11-26T20:33:09.048Z] =================================================================================================================== 00:20:15.493 [2024-11-26T20:33:09.048Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85074' 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85074 00:20:15.493 [2024-11-26 20:33:08.850403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:15.493 20:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85074 00:20:16.058 [2024-11-26 20:33:09.365153] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.997 20:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:16.997 00:20:16.997 real 0m20.523s 00:20:16.997 user 0m24.649s 00:20:16.997 sys 0m2.339s 00:20:16.997 20:33:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.997 20:33:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.997 ************************************ 00:20:16.997 END TEST raid5f_rebuild_test 00:20:16.997 ************************************ 00:20:17.257 20:33:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:20:17.257 20:33:10 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:17.257 20:33:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.257 20:33:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:17.257 ************************************ 00:20:17.257 START TEST raid5f_rebuild_test_sb 00:20:17.257 ************************************ 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:17.257 20:33:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85597 
00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85597 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85597 ']' 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:17.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.257 20:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.257 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:17.257 Zero copy mechanism will not be used. 00:20:17.257 [2024-11-26 20:33:10.667174] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:20:17.257 [2024-11-26 20:33:10.667319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85597 ] 00:20:17.517 [2024-11-26 20:33:10.842624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.517 [2024-11-26 20:33:10.952769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.777 [2024-11-26 20:33:11.161707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.777 [2024-11-26 20:33:11.161746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:18.037 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.038 BaseBdev1_malloc 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.038 [2024-11-26 20:33:11.550388] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:18.038 [2024-11-26 20:33:11.550457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.038 [2024-11-26 20:33:11.550485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:18.038 [2024-11-26 20:33:11.550498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.038 [2024-11-26 20:33:11.552846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.038 [2024-11-26 20:33:11.552888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:18.038 BaseBdev1 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.038 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.298 BaseBdev2_malloc 00:20:18.298 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.298 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:18.298 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.298 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.298 [2024-11-26 20:33:11.609429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:18.299 [2024-11-26 20:33:11.609501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:18.299 [2024-11-26 20:33:11.609527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:18.299 [2024-11-26 20:33:11.609539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.299 [2024-11-26 20:33:11.611965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.299 [2024-11-26 20:33:11.612008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:18.299 BaseBdev2 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 BaseBdev3_malloc 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 [2024-11-26 20:33:11.683069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:18.299 [2024-11-26 20:33:11.683146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.299 [2024-11-26 20:33:11.683170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:18.299 [2024-11-26 
20:33:11.683181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.299 [2024-11-26 20:33:11.685466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.299 [2024-11-26 20:33:11.685507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:18.299 BaseBdev3 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 BaseBdev4_malloc 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 [2024-11-26 20:33:11.745107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:18.299 [2024-11-26 20:33:11.745178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.299 [2024-11-26 20:33:11.745202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:18.299 [2024-11-26 20:33:11.745215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.299 [2024-11-26 20:33:11.747556] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:18.299 [2024-11-26 20:33:11.747600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:18.299 BaseBdev4 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 spare_malloc 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 spare_delay 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 [2024-11-26 20:33:11.811487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:18.299 [2024-11-26 20:33:11.811548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.299 [2024-11-26 20:33:11.811585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:20:18.299 [2024-11-26 20:33:11.811598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.299 [2024-11-26 20:33:11.814050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.299 [2024-11-26 20:33:11.814093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:18.299 spare 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 [2024-11-26 20:33:11.819528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.299 [2024-11-26 20:33:11.821567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.299 [2024-11-26 20:33:11.821642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:18.299 [2024-11-26 20:33:11.821700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:18.299 [2024-11-26 20:33:11.821938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:18.299 [2024-11-26 20:33:11.821964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:18.299 [2024-11-26 20:33:11.822257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:18.299 [2024-11-26 20:33:11.830083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:18.299 [2024-11-26 20:33:11.830108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:20:18.299 [2024-11-26 20:33:11.830329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.299 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.559 20:33:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.559 "name": "raid_bdev1", 00:20:18.559 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:18.559 "strip_size_kb": 64, 00:20:18.559 "state": "online", 00:20:18.559 "raid_level": "raid5f", 00:20:18.559 "superblock": true, 00:20:18.559 "num_base_bdevs": 4, 00:20:18.559 "num_base_bdevs_discovered": 4, 00:20:18.559 "num_base_bdevs_operational": 4, 00:20:18.559 "base_bdevs_list": [ 00:20:18.559 { 00:20:18.559 "name": "BaseBdev1", 00:20:18.559 "uuid": "1f97683f-f5d8-5f35-8b40-4054395b3e28", 00:20:18.559 "is_configured": true, 00:20:18.559 "data_offset": 2048, 00:20:18.559 "data_size": 63488 00:20:18.559 }, 00:20:18.559 { 00:20:18.559 "name": "BaseBdev2", 00:20:18.559 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:18.559 "is_configured": true, 00:20:18.559 "data_offset": 2048, 00:20:18.559 "data_size": 63488 00:20:18.559 }, 00:20:18.559 { 00:20:18.559 "name": "BaseBdev3", 00:20:18.559 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:18.559 "is_configured": true, 00:20:18.559 "data_offset": 2048, 00:20:18.559 "data_size": 63488 00:20:18.559 }, 00:20:18.559 { 00:20:18.559 "name": "BaseBdev4", 00:20:18.559 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:18.559 "is_configured": true, 00:20:18.559 "data_offset": 2048, 00:20:18.559 "data_size": 63488 00:20:18.559 } 00:20:18.559 ] 00:20:18.559 }' 00:20:18.559 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.559 20:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.818 20:33:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.818 [2024-11-26 20:33:12.254867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:18.818 20:33:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.818 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:19.076 [2024-11-26 20:33:12.518301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:19.076 /dev/nbd0 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.076 1+0 records in 00:20:19.076 
1+0 records out 00:20:19.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269981 s, 15.2 MB/s 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:19.076 20:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:20:19.644 496+0 records in 00:20:19.644 496+0 records out 00:20:19.644 97517568 bytes (98 MB, 93 MiB) copied, 0.544209 s, 179 MB/s 00:20:19.644 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:19.644 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:19.644 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:19.644 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.644 20:33:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:19.644 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.644 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:19.904 [2024-11-26 20:33:13.326676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.904 [2024-11-26 20:33:13.346550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:19.904 20:33:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.904 "name": "raid_bdev1", 00:20:19.904 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:19.904 "strip_size_kb": 64, 00:20:19.904 "state": "online", 00:20:19.904 "raid_level": "raid5f", 00:20:19.904 "superblock": true, 00:20:19.904 "num_base_bdevs": 4, 00:20:19.904 "num_base_bdevs_discovered": 3, 00:20:19.904 "num_base_bdevs_operational": 3, 00:20:19.904 
"base_bdevs_list": [ 00:20:19.904 { 00:20:19.904 "name": null, 00:20:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.904 "is_configured": false, 00:20:19.904 "data_offset": 0, 00:20:19.904 "data_size": 63488 00:20:19.904 }, 00:20:19.904 { 00:20:19.904 "name": "BaseBdev2", 00:20:19.904 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:19.904 "is_configured": true, 00:20:19.904 "data_offset": 2048, 00:20:19.904 "data_size": 63488 00:20:19.904 }, 00:20:19.904 { 00:20:19.904 "name": "BaseBdev3", 00:20:19.904 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:19.904 "is_configured": true, 00:20:19.904 "data_offset": 2048, 00:20:19.904 "data_size": 63488 00:20:19.904 }, 00:20:19.904 { 00:20:19.904 "name": "BaseBdev4", 00:20:19.904 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:19.904 "is_configured": true, 00:20:19.904 "data_offset": 2048, 00:20:19.904 "data_size": 63488 00:20:19.904 } 00:20:19.904 ] 00:20:19.904 }' 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.904 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.473 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:20.473 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.473 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.473 [2024-11-26 20:33:13.777883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:20.473 [2024-11-26 20:33:13.798207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:20:20.473 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.473 20:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:20.473 [2024-11-26 20:33:13.809538] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.417 "name": "raid_bdev1", 00:20:21.417 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:21.417 "strip_size_kb": 64, 00:20:21.417 "state": "online", 00:20:21.417 "raid_level": "raid5f", 00:20:21.417 "superblock": true, 00:20:21.417 "num_base_bdevs": 4, 00:20:21.417 "num_base_bdevs_discovered": 4, 00:20:21.417 "num_base_bdevs_operational": 4, 00:20:21.417 "process": { 00:20:21.417 "type": "rebuild", 00:20:21.417 "target": "spare", 00:20:21.417 "progress": { 00:20:21.417 "blocks": 17280, 00:20:21.417 "percent": 9 00:20:21.417 } 00:20:21.417 }, 00:20:21.417 "base_bdevs_list": [ 00:20:21.417 { 00:20:21.417 "name": "spare", 00:20:21.417 "uuid": 
"3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:21.417 "is_configured": true, 00:20:21.417 "data_offset": 2048, 00:20:21.417 "data_size": 63488 00:20:21.417 }, 00:20:21.417 { 00:20:21.417 "name": "BaseBdev2", 00:20:21.417 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:21.417 "is_configured": true, 00:20:21.417 "data_offset": 2048, 00:20:21.417 "data_size": 63488 00:20:21.417 }, 00:20:21.417 { 00:20:21.417 "name": "BaseBdev3", 00:20:21.417 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:21.417 "is_configured": true, 00:20:21.417 "data_offset": 2048, 00:20:21.417 "data_size": 63488 00:20:21.417 }, 00:20:21.417 { 00:20:21.417 "name": "BaseBdev4", 00:20:21.417 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:21.417 "is_configured": true, 00:20:21.417 "data_offset": 2048, 00:20:21.417 "data_size": 63488 00:20:21.417 } 00:20:21.417 ] 00:20:21.417 }' 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.417 20:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.417 [2024-11-26 20:33:14.964496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.689 [2024-11-26 20:33:15.018435] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:21.689 [2024-11-26 20:33:15.018615] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.689 [2024-11-26 20:33:15.018639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.689 [2024-11-26 20:33:15.018651] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:21.689 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.689 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:21.689 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
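The `verify_raid_bdev_process` checks traced throughout this log repeatedly apply the jq filters `.process.type // "none"` and `.process.target // "none"` to the `bdev_raid_get_bdevs` output to confirm a rebuild targeting `spare` is in flight. A minimal Python sketch of the same check follows; the inline JSON is a hypothetical sample modeled on the dumps above, not live RPC data:

```python
import json

# Hypothetical sample modeled on the bdev_raid_get_bdevs dumps in this log;
# trimmed to the fields the process check actually reads.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": {"blocks": 17280, "percent": 9}
  }
}
""")

# Mirror the script's jq expressions '.process.type // "none"' and
# '.process.target // "none"': fall back to "none" when no process
# object is present (i.e. no rebuild is running).
process = raid_bdev_info.get("process") or {}
process_type = process.get("type", "none")
process_target = process.get("target", "none")

print(process_type, process_target)  # rebuild spare
```

Once the rebuild completes and the `process` key disappears from the RPC output, both values fall back to `"none"`, which is exactly the post-rebuild state the script asserts with `[[ none == \n\o\n\e ]]`.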
00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.690 "name": "raid_bdev1", 00:20:21.690 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:21.690 "strip_size_kb": 64, 00:20:21.690 "state": "online", 00:20:21.690 "raid_level": "raid5f", 00:20:21.690 "superblock": true, 00:20:21.690 "num_base_bdevs": 4, 00:20:21.690 "num_base_bdevs_discovered": 3, 00:20:21.690 "num_base_bdevs_operational": 3, 00:20:21.690 "base_bdevs_list": [ 00:20:21.690 { 00:20:21.690 "name": null, 00:20:21.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.690 "is_configured": false, 00:20:21.690 "data_offset": 0, 00:20:21.690 "data_size": 63488 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "name": "BaseBdev2", 00:20:21.690 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:21.690 "is_configured": true, 00:20:21.690 "data_offset": 2048, 00:20:21.690 "data_size": 63488 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "name": "BaseBdev3", 00:20:21.690 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:21.690 "is_configured": true, 00:20:21.690 "data_offset": 2048, 00:20:21.690 "data_size": 63488 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "name": "BaseBdev4", 00:20:21.690 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:21.690 "is_configured": true, 00:20:21.690 "data_offset": 2048, 00:20:21.690 "data_size": 63488 00:20:21.690 } 00:20:21.690 ] 00:20:21.690 }' 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.690 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.259 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:22.259 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.259 
20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:22.259 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:22.259 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.260 "name": "raid_bdev1", 00:20:22.260 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:22.260 "strip_size_kb": 64, 00:20:22.260 "state": "online", 00:20:22.260 "raid_level": "raid5f", 00:20:22.260 "superblock": true, 00:20:22.260 "num_base_bdevs": 4, 00:20:22.260 "num_base_bdevs_discovered": 3, 00:20:22.260 "num_base_bdevs_operational": 3, 00:20:22.260 "base_bdevs_list": [ 00:20:22.260 { 00:20:22.260 "name": null, 00:20:22.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.260 "is_configured": false, 00:20:22.260 "data_offset": 0, 00:20:22.260 "data_size": 63488 00:20:22.260 }, 00:20:22.260 { 00:20:22.260 "name": "BaseBdev2", 00:20:22.260 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:22.260 "is_configured": true, 00:20:22.260 "data_offset": 2048, 00:20:22.260 "data_size": 63488 00:20:22.260 }, 00:20:22.260 { 00:20:22.260 "name": "BaseBdev3", 00:20:22.260 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:22.260 "is_configured": true, 00:20:22.260 "data_offset": 2048, 00:20:22.260 
"data_size": 63488 00:20:22.260 }, 00:20:22.260 { 00:20:22.260 "name": "BaseBdev4", 00:20:22.260 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:22.260 "is_configured": true, 00:20:22.260 "data_offset": 2048, 00:20:22.260 "data_size": 63488 00:20:22.260 } 00:20:22.260 ] 00:20:22.260 }' 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.260 [2024-11-26 20:33:15.655127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.260 [2024-11-26 20:33:15.671357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.260 20:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:22.260 [2024-11-26 20:33:15.681712] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.198 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.198 "name": "raid_bdev1", 00:20:23.198 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:23.198 "strip_size_kb": 64, 00:20:23.198 "state": "online", 00:20:23.198 "raid_level": "raid5f", 00:20:23.198 "superblock": true, 00:20:23.198 "num_base_bdevs": 4, 00:20:23.198 "num_base_bdevs_discovered": 4, 00:20:23.198 "num_base_bdevs_operational": 4, 00:20:23.198 "process": { 00:20:23.198 "type": "rebuild", 00:20:23.198 "target": "spare", 00:20:23.198 "progress": { 00:20:23.198 "blocks": 19200, 00:20:23.198 "percent": 10 00:20:23.198 } 00:20:23.198 }, 00:20:23.198 "base_bdevs_list": [ 00:20:23.198 { 00:20:23.198 "name": "spare", 00:20:23.198 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:23.198 "is_configured": true, 00:20:23.198 "data_offset": 2048, 00:20:23.198 "data_size": 63488 00:20:23.198 }, 00:20:23.198 { 00:20:23.198 "name": "BaseBdev2", 00:20:23.198 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:23.198 "is_configured": true, 00:20:23.198 "data_offset": 2048, 00:20:23.198 "data_size": 63488 00:20:23.198 }, 00:20:23.198 { 
00:20:23.198 "name": "BaseBdev3", 00:20:23.198 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:23.198 "is_configured": true, 00:20:23.198 "data_offset": 2048, 00:20:23.198 "data_size": 63488 00:20:23.198 }, 00:20:23.199 { 00:20:23.199 "name": "BaseBdev4", 00:20:23.199 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:23.199 "is_configured": true, 00:20:23.199 "data_offset": 2048, 00:20:23.199 "data_size": 63488 00:20:23.199 } 00:20:23.199 ] 00:20:23.199 }' 00:20:23.199 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:23.458 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=669 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.458 "name": "raid_bdev1", 00:20:23.458 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:23.458 "strip_size_kb": 64, 00:20:23.458 "state": "online", 00:20:23.458 "raid_level": "raid5f", 00:20:23.458 "superblock": true, 00:20:23.458 "num_base_bdevs": 4, 00:20:23.458 "num_base_bdevs_discovered": 4, 00:20:23.458 "num_base_bdevs_operational": 4, 00:20:23.458 "process": { 00:20:23.458 "type": "rebuild", 00:20:23.458 "target": "spare", 00:20:23.458 "progress": { 00:20:23.458 "blocks": 21120, 00:20:23.458 "percent": 11 00:20:23.458 } 00:20:23.458 }, 00:20:23.458 "base_bdevs_list": [ 00:20:23.458 { 00:20:23.458 "name": "spare", 00:20:23.458 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:23.458 "is_configured": true, 00:20:23.458 "data_offset": 2048, 00:20:23.458 "data_size": 63488 00:20:23.458 }, 00:20:23.458 { 00:20:23.458 "name": "BaseBdev2", 00:20:23.458 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:23.458 "is_configured": true, 00:20:23.458 "data_offset": 2048, 00:20:23.458 "data_size": 63488 00:20:23.458 }, 00:20:23.458 { 
00:20:23.458 "name": "BaseBdev3", 00:20:23.458 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:23.458 "is_configured": true, 00:20:23.458 "data_offset": 2048, 00:20:23.458 "data_size": 63488 00:20:23.458 }, 00:20:23.458 { 00:20:23.458 "name": "BaseBdev4", 00:20:23.458 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:23.458 "is_configured": true, 00:20:23.458 "data_offset": 2048, 00:20:23.458 "data_size": 63488 00:20:23.458 } 00:20:23.458 ] 00:20:23.458 }' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.458 20:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.836 20:33:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.836 20:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.836 20:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.836 20:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.836 "name": "raid_bdev1", 00:20:24.836 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:24.836 "strip_size_kb": 64, 00:20:24.836 "state": "online", 00:20:24.836 "raid_level": "raid5f", 00:20:24.836 "superblock": true, 00:20:24.836 "num_base_bdevs": 4, 00:20:24.836 "num_base_bdevs_discovered": 4, 00:20:24.836 "num_base_bdevs_operational": 4, 00:20:24.836 "process": { 00:20:24.836 "type": "rebuild", 00:20:24.836 "target": "spare", 00:20:24.836 "progress": { 00:20:24.836 "blocks": 44160, 00:20:24.836 "percent": 23 00:20:24.836 } 00:20:24.836 }, 00:20:24.836 "base_bdevs_list": [ 00:20:24.836 { 00:20:24.836 "name": "spare", 00:20:24.836 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:24.836 "is_configured": true, 00:20:24.836 "data_offset": 2048, 00:20:24.836 "data_size": 63488 00:20:24.836 }, 00:20:24.836 { 00:20:24.836 "name": "BaseBdev2", 00:20:24.836 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:24.836 "is_configured": true, 00:20:24.836 "data_offset": 2048, 00:20:24.836 "data_size": 63488 00:20:24.836 }, 00:20:24.836 { 00:20:24.836 "name": "BaseBdev3", 00:20:24.836 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:24.836 "is_configured": true, 00:20:24.836 "data_offset": 2048, 00:20:24.836 "data_size": 63488 00:20:24.836 }, 00:20:24.836 { 00:20:24.836 "name": "BaseBdev4", 00:20:24.836 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:24.836 "is_configured": true, 00:20:24.836 "data_offset": 2048, 00:20:24.836 "data_size": 63488 00:20:24.836 } 00:20:24.836 ] 00:20:24.836 }' 00:20:24.836 20:33:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.836 20:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:24.836 20:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.836 20:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.836 20:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.791 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.791 "name": "raid_bdev1", 00:20:25.791 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:25.791 "strip_size_kb": 64, 00:20:25.791 "state": 
"online", 00:20:25.791 "raid_level": "raid5f", 00:20:25.791 "superblock": true, 00:20:25.791 "num_base_bdevs": 4, 00:20:25.791 "num_base_bdevs_discovered": 4, 00:20:25.791 "num_base_bdevs_operational": 4, 00:20:25.791 "process": { 00:20:25.791 "type": "rebuild", 00:20:25.791 "target": "spare", 00:20:25.791 "progress": { 00:20:25.791 "blocks": 65280, 00:20:25.791 "percent": 34 00:20:25.791 } 00:20:25.791 }, 00:20:25.791 "base_bdevs_list": [ 00:20:25.791 { 00:20:25.791 "name": "spare", 00:20:25.791 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:25.791 "is_configured": true, 00:20:25.791 "data_offset": 2048, 00:20:25.791 "data_size": 63488 00:20:25.791 }, 00:20:25.791 { 00:20:25.791 "name": "BaseBdev2", 00:20:25.791 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:25.791 "is_configured": true, 00:20:25.791 "data_offset": 2048, 00:20:25.791 "data_size": 63488 00:20:25.791 }, 00:20:25.791 { 00:20:25.791 "name": "BaseBdev3", 00:20:25.791 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:25.791 "is_configured": true, 00:20:25.792 "data_offset": 2048, 00:20:25.792 "data_size": 63488 00:20:25.792 }, 00:20:25.792 { 00:20:25.792 "name": "BaseBdev4", 00:20:25.792 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:25.792 "is_configured": true, 00:20:25.792 "data_offset": 2048, 00:20:25.792 "data_size": 63488 00:20:25.792 } 00:20:25.792 ] 00:20:25.792 }' 00:20:25.792 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.792 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.792 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.792 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.792 20:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.202 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.203 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.203 "name": "raid_bdev1", 00:20:27.203 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:27.203 "strip_size_kb": 64, 00:20:27.203 "state": "online", 00:20:27.203 "raid_level": "raid5f", 00:20:27.203 "superblock": true, 00:20:27.203 "num_base_bdevs": 4, 00:20:27.203 "num_base_bdevs_discovered": 4, 00:20:27.203 "num_base_bdevs_operational": 4, 00:20:27.203 "process": { 00:20:27.203 "type": "rebuild", 00:20:27.203 "target": "spare", 00:20:27.203 "progress": { 00:20:27.203 "blocks": 86400, 00:20:27.203 "percent": 45 00:20:27.203 } 00:20:27.203 }, 00:20:27.203 "base_bdevs_list": [ 00:20:27.203 { 00:20:27.203 "name": "spare", 00:20:27.203 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 
00:20:27.203 "is_configured": true, 00:20:27.203 "data_offset": 2048, 00:20:27.203 "data_size": 63488 00:20:27.203 }, 00:20:27.203 { 00:20:27.203 "name": "BaseBdev2", 00:20:27.203 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:27.203 "is_configured": true, 00:20:27.203 "data_offset": 2048, 00:20:27.203 "data_size": 63488 00:20:27.203 }, 00:20:27.203 { 00:20:27.203 "name": "BaseBdev3", 00:20:27.203 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:27.203 "is_configured": true, 00:20:27.203 "data_offset": 2048, 00:20:27.203 "data_size": 63488 00:20:27.203 }, 00:20:27.203 { 00:20:27.203 "name": "BaseBdev4", 00:20:27.203 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:27.203 "is_configured": true, 00:20:27.203 "data_offset": 2048, 00:20:27.203 "data_size": 63488 00:20:27.203 } 00:20:27.203 ] 00:20:27.203 }' 00:20:27.203 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.203 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.203 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.203 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.203 20:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.143 20:33:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.143 "name": "raid_bdev1", 00:20:28.143 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:28.143 "strip_size_kb": 64, 00:20:28.143 "state": "online", 00:20:28.143 "raid_level": "raid5f", 00:20:28.143 "superblock": true, 00:20:28.143 "num_base_bdevs": 4, 00:20:28.143 "num_base_bdevs_discovered": 4, 00:20:28.143 "num_base_bdevs_operational": 4, 00:20:28.143 "process": { 00:20:28.143 "type": "rebuild", 00:20:28.143 "target": "spare", 00:20:28.143 "progress": { 00:20:28.143 "blocks": 109440, 00:20:28.143 "percent": 57 00:20:28.143 } 00:20:28.143 }, 00:20:28.143 "base_bdevs_list": [ 00:20:28.143 { 00:20:28.143 "name": "spare", 00:20:28.143 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:28.143 "is_configured": true, 00:20:28.143 "data_offset": 2048, 00:20:28.143 "data_size": 63488 00:20:28.143 }, 00:20:28.143 { 00:20:28.143 "name": "BaseBdev2", 00:20:28.143 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:28.143 "is_configured": true, 00:20:28.143 "data_offset": 2048, 00:20:28.143 "data_size": 63488 00:20:28.143 }, 00:20:28.143 { 00:20:28.143 "name": "BaseBdev3", 00:20:28.143 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:28.143 "is_configured": true, 00:20:28.143 "data_offset": 2048, 00:20:28.143 
"data_size": 63488 00:20:28.143 }, 00:20:28.143 { 00:20:28.143 "name": "BaseBdev4", 00:20:28.143 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:28.143 "is_configured": true, 00:20:28.143 "data_offset": 2048, 00:20:28.143 "data_size": 63488 00:20:28.143 } 00:20:28.143 ] 00:20:28.143 }' 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.143 20:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.083 
20:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.083 "name": "raid_bdev1", 00:20:29.083 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:29.083 "strip_size_kb": 64, 00:20:29.083 "state": "online", 00:20:29.083 "raid_level": "raid5f", 00:20:29.083 "superblock": true, 00:20:29.083 "num_base_bdevs": 4, 00:20:29.083 "num_base_bdevs_discovered": 4, 00:20:29.083 "num_base_bdevs_operational": 4, 00:20:29.083 "process": { 00:20:29.083 "type": "rebuild", 00:20:29.083 "target": "spare", 00:20:29.083 "progress": { 00:20:29.083 "blocks": 130560, 00:20:29.083 "percent": 68 00:20:29.083 } 00:20:29.083 }, 00:20:29.083 "base_bdevs_list": [ 00:20:29.083 { 00:20:29.083 "name": "spare", 00:20:29.083 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:29.083 "is_configured": true, 00:20:29.083 "data_offset": 2048, 00:20:29.083 "data_size": 63488 00:20:29.083 }, 00:20:29.083 { 00:20:29.083 "name": "BaseBdev2", 00:20:29.083 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:29.083 "is_configured": true, 00:20:29.083 "data_offset": 2048, 00:20:29.083 "data_size": 63488 00:20:29.083 }, 00:20:29.083 { 00:20:29.083 "name": "BaseBdev3", 00:20:29.083 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:29.083 "is_configured": true, 00:20:29.083 "data_offset": 2048, 00:20:29.083 "data_size": 63488 00:20:29.083 }, 00:20:29.083 { 00:20:29.083 "name": "BaseBdev4", 00:20:29.083 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:29.083 "is_configured": true, 00:20:29.083 "data_offset": 2048, 00:20:29.083 "data_size": 63488 00:20:29.083 } 00:20:29.083 ] 00:20:29.083 }' 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.083 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.083 20:33:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.342 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.342 20:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.282 "name": "raid_bdev1", 00:20:30.282 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:30.282 "strip_size_kb": 64, 00:20:30.282 "state": "online", 00:20:30.282 "raid_level": "raid5f", 00:20:30.282 "superblock": true, 00:20:30.282 "num_base_bdevs": 4, 00:20:30.282 "num_base_bdevs_discovered": 4, 00:20:30.282 "num_base_bdevs_operational": 
4, 00:20:30.282 "process": { 00:20:30.282 "type": "rebuild", 00:20:30.282 "target": "spare", 00:20:30.282 "progress": { 00:20:30.282 "blocks": 151680, 00:20:30.282 "percent": 79 00:20:30.282 } 00:20:30.282 }, 00:20:30.282 "base_bdevs_list": [ 00:20:30.282 { 00:20:30.282 "name": "spare", 00:20:30.282 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:30.282 "is_configured": true, 00:20:30.282 "data_offset": 2048, 00:20:30.282 "data_size": 63488 00:20:30.282 }, 00:20:30.282 { 00:20:30.282 "name": "BaseBdev2", 00:20:30.282 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:30.282 "is_configured": true, 00:20:30.282 "data_offset": 2048, 00:20:30.282 "data_size": 63488 00:20:30.282 }, 00:20:30.282 { 00:20:30.282 "name": "BaseBdev3", 00:20:30.282 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:30.282 "is_configured": true, 00:20:30.282 "data_offset": 2048, 00:20:30.282 "data_size": 63488 00:20:30.282 }, 00:20:30.282 { 00:20:30.282 "name": "BaseBdev4", 00:20:30.282 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:30.282 "is_configured": true, 00:20:30.282 "data_offset": 2048, 00:20:30.282 "data_size": 63488 00:20:30.282 } 00:20:30.282 ] 00:20:30.282 }' 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.282 20:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.663 
20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.663 "name": "raid_bdev1", 00:20:31.663 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:31.663 "strip_size_kb": 64, 00:20:31.663 "state": "online", 00:20:31.663 "raid_level": "raid5f", 00:20:31.663 "superblock": true, 00:20:31.663 "num_base_bdevs": 4, 00:20:31.663 "num_base_bdevs_discovered": 4, 00:20:31.663 "num_base_bdevs_operational": 4, 00:20:31.663 "process": { 00:20:31.663 "type": "rebuild", 00:20:31.663 "target": "spare", 00:20:31.663 "progress": { 00:20:31.663 "blocks": 172800, 00:20:31.663 "percent": 90 00:20:31.663 } 00:20:31.663 }, 00:20:31.663 "base_bdevs_list": [ 00:20:31.663 { 00:20:31.663 "name": "spare", 00:20:31.663 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:31.663 "is_configured": true, 00:20:31.663 "data_offset": 2048, 00:20:31.663 "data_size": 63488 00:20:31.663 }, 00:20:31.663 { 00:20:31.663 "name": "BaseBdev2", 00:20:31.663 "uuid": 
"c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:31.663 "is_configured": true, 00:20:31.663 "data_offset": 2048, 00:20:31.663 "data_size": 63488 00:20:31.663 }, 00:20:31.663 { 00:20:31.663 "name": "BaseBdev3", 00:20:31.663 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:31.663 "is_configured": true, 00:20:31.663 "data_offset": 2048, 00:20:31.663 "data_size": 63488 00:20:31.663 }, 00:20:31.663 { 00:20:31.663 "name": "BaseBdev4", 00:20:31.663 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:31.663 "is_configured": true, 00:20:31.663 "data_offset": 2048, 00:20:31.663 "data_size": 63488 00:20:31.663 } 00:20:31.663 ] 00:20:31.663 }' 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.663 20:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:32.232 [2024-11-26 20:33:25.745232] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:32.232 [2024-11-26 20:33:25.745416] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:32.232 [2024-11-26 20:33:25.745597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.492 20:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.492 "name": "raid_bdev1", 00:20:32.492 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:32.492 "strip_size_kb": 64, 00:20:32.492 "state": "online", 00:20:32.492 "raid_level": "raid5f", 00:20:32.492 "superblock": true, 00:20:32.492 "num_base_bdevs": 4, 00:20:32.492 "num_base_bdevs_discovered": 4, 00:20:32.492 "num_base_bdevs_operational": 4, 00:20:32.492 "base_bdevs_list": [ 00:20:32.492 { 00:20:32.492 "name": "spare", 00:20:32.492 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:32.492 "is_configured": true, 00:20:32.492 "data_offset": 2048, 00:20:32.492 "data_size": 63488 00:20:32.492 }, 00:20:32.492 { 00:20:32.492 "name": "BaseBdev2", 00:20:32.492 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:32.492 "is_configured": true, 00:20:32.492 "data_offset": 2048, 00:20:32.492 "data_size": 63488 00:20:32.492 }, 00:20:32.492 { 00:20:32.492 "name": "BaseBdev3", 00:20:32.492 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:32.492 "is_configured": true, 00:20:32.492 "data_offset": 2048, 00:20:32.492 "data_size": 63488 00:20:32.492 }, 
00:20:32.492 { 00:20:32.492 "name": "BaseBdev4", 00:20:32.492 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:32.492 "is_configured": true, 00:20:32.492 "data_offset": 2048, 00:20:32.492 "data_size": 63488 00:20:32.492 } 00:20:32.492 ] 00:20:32.492 }' 00:20:32.492 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.752 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.753 "name": "raid_bdev1", 00:20:32.753 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:32.753 "strip_size_kb": 64, 00:20:32.753 "state": "online", 00:20:32.753 "raid_level": "raid5f", 00:20:32.753 "superblock": true, 00:20:32.753 "num_base_bdevs": 4, 00:20:32.753 "num_base_bdevs_discovered": 4, 00:20:32.753 "num_base_bdevs_operational": 4, 00:20:32.753 "base_bdevs_list": [ 00:20:32.753 { 00:20:32.753 "name": "spare", 00:20:32.753 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 }, 00:20:32.753 { 00:20:32.753 "name": "BaseBdev2", 00:20:32.753 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 }, 00:20:32.753 { 00:20:32.753 "name": "BaseBdev3", 00:20:32.753 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 }, 00:20:32.753 { 00:20:32.753 "name": "BaseBdev4", 00:20:32.753 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 } 00:20:32.753 ] 00:20:32.753 }' 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:32.753 20:33:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.753 "name": "raid_bdev1", 00:20:32.753 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:32.753 "strip_size_kb": 64, 00:20:32.753 "state": "online", 00:20:32.753 "raid_level": "raid5f", 00:20:32.753 "superblock": true, 00:20:32.753 "num_base_bdevs": 4, 00:20:32.753 "num_base_bdevs_discovered": 4, 00:20:32.753 "num_base_bdevs_operational": 4, 00:20:32.753 
"base_bdevs_list": [ 00:20:32.753 { 00:20:32.753 "name": "spare", 00:20:32.753 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 }, 00:20:32.753 { 00:20:32.753 "name": "BaseBdev2", 00:20:32.753 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 }, 00:20:32.753 { 00:20:32.753 "name": "BaseBdev3", 00:20:32.753 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 }, 00:20:32.753 { 00:20:32.753 "name": "BaseBdev4", 00:20:32.753 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:32.753 "is_configured": true, 00:20:32.753 "data_offset": 2048, 00:20:32.753 "data_size": 63488 00:20:32.753 } 00:20:32.753 ] 00:20:32.753 }' 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.753 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.321 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:33.321 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.321 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.321 [2024-11-26 20:33:26.694021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.321 [2024-11-26 20:33:26.694120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.321 [2024-11-26 20:33:26.694251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.322 [2024-11-26 20:33:26.694387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:20:33.322 [2024-11-26 20:33:26.694468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:33.322 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:33.582 /dev/nbd0 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:33.582 20:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:33.582 1+0 records in 00:20:33.582 1+0 records out 00:20:33.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466926 s, 8.8 MB/s 00:20:33.582 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.582 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:33.582 20:33:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.582 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:33.582 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:33.582 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:33.582 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:33.582 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:33.842 /dev/nbd1 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:20:33.842 1+0 records in 00:20:33.842 1+0 records out 00:20:33.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387265 s, 10.6 MB/s 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:33.842 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:34.102 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:34.102 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:34.102 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:34.102 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.102 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:34.102 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.102 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.362 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.622 20:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.622 [2024-11-26 20:33:28.009786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.622 [2024-11-26 20:33:28.009851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.622 [2024-11-26 20:33:28.009878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:34.622 [2024-11-26 20:33:28.009889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.622 [2024-11-26 20:33:28.012543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.622 [2024-11-26 20:33:28.012583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.622 [2024-11-26 20:33:28.012684] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:34.622 [2024-11-26 20:33:28.012757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.622 [2024-11-26 20:33:28.012923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.622 [2024-11-26 20:33:28.013054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:34.622 [2024-11-26 20:33:28.013150] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:34.622 spare 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.622 [2024-11-26 20:33:28.113089] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:34.622 [2024-11-26 20:33:28.113177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:34.622 [2024-11-26 20:33:28.113534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:20:34.622 [2024-11-26 20:33:28.121169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:34.622 [2024-11-26 20:33:28.121191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:34.622 [2024-11-26 20:33:28.121408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.622 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.881 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.881 "name": "raid_bdev1", 00:20:34.881 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:34.881 "strip_size_kb": 64, 00:20:34.881 "state": "online", 00:20:34.881 "raid_level": "raid5f", 00:20:34.881 "superblock": true, 00:20:34.881 "num_base_bdevs": 4, 00:20:34.881 "num_base_bdevs_discovered": 4, 00:20:34.881 "num_base_bdevs_operational": 4, 00:20:34.881 "base_bdevs_list": [ 00:20:34.881 { 00:20:34.881 "name": "spare", 00:20:34.881 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:34.881 "is_configured": true, 00:20:34.881 "data_offset": 2048, 00:20:34.882 "data_size": 63488 00:20:34.882 }, 00:20:34.882 { 00:20:34.882 "name": "BaseBdev2", 00:20:34.882 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:34.882 "is_configured": true, 00:20:34.882 "data_offset": 
2048, 00:20:34.882 "data_size": 63488 00:20:34.882 }, 00:20:34.882 { 00:20:34.882 "name": "BaseBdev3", 00:20:34.882 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:34.882 "is_configured": true, 00:20:34.882 "data_offset": 2048, 00:20:34.882 "data_size": 63488 00:20:34.882 }, 00:20:34.882 { 00:20:34.882 "name": "BaseBdev4", 00:20:34.882 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:34.882 "is_configured": true, 00:20:34.882 "data_offset": 2048, 00:20:34.882 "data_size": 63488 00:20:34.882 } 00:20:34.882 ] 00:20:34.882 }' 00:20:34.882 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.882 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.141 "name": 
"raid_bdev1", 00:20:35.141 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:35.141 "strip_size_kb": 64, 00:20:35.141 "state": "online", 00:20:35.141 "raid_level": "raid5f", 00:20:35.141 "superblock": true, 00:20:35.141 "num_base_bdevs": 4, 00:20:35.141 "num_base_bdevs_discovered": 4, 00:20:35.141 "num_base_bdevs_operational": 4, 00:20:35.141 "base_bdevs_list": [ 00:20:35.141 { 00:20:35.141 "name": "spare", 00:20:35.141 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:35.141 "is_configured": true, 00:20:35.141 "data_offset": 2048, 00:20:35.141 "data_size": 63488 00:20:35.141 }, 00:20:35.141 { 00:20:35.141 "name": "BaseBdev2", 00:20:35.141 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:35.141 "is_configured": true, 00:20:35.141 "data_offset": 2048, 00:20:35.141 "data_size": 63488 00:20:35.141 }, 00:20:35.141 { 00:20:35.141 "name": "BaseBdev3", 00:20:35.141 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:35.141 "is_configured": true, 00:20:35.141 "data_offset": 2048, 00:20:35.141 "data_size": 63488 00:20:35.141 }, 00:20:35.141 { 00:20:35.141 "name": "BaseBdev4", 00:20:35.141 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:35.141 "is_configured": true, 00:20:35.141 "data_offset": 2048, 00:20:35.141 "data_size": 63488 00:20:35.141 } 00:20:35.141 ] 00:20:35.141 }' 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.141 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.142 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.401 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.402 [2024-11-26 20:33:28.733638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.402 "name": "raid_bdev1", 00:20:35.402 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:35.402 "strip_size_kb": 64, 00:20:35.402 "state": "online", 00:20:35.402 "raid_level": "raid5f", 00:20:35.402 "superblock": true, 00:20:35.402 "num_base_bdevs": 4, 00:20:35.402 "num_base_bdevs_discovered": 3, 00:20:35.402 "num_base_bdevs_operational": 3, 00:20:35.402 "base_bdevs_list": [ 00:20:35.402 { 00:20:35.402 "name": null, 00:20:35.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.402 "is_configured": false, 00:20:35.402 "data_offset": 0, 00:20:35.402 "data_size": 63488 00:20:35.402 }, 00:20:35.402 { 00:20:35.402 "name": "BaseBdev2", 00:20:35.402 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:35.402 "is_configured": true, 00:20:35.402 "data_offset": 2048, 00:20:35.402 "data_size": 63488 00:20:35.402 }, 00:20:35.402 { 00:20:35.402 "name": "BaseBdev3", 00:20:35.402 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:35.402 "is_configured": true, 00:20:35.402 "data_offset": 2048, 00:20:35.402 "data_size": 63488 00:20:35.402 }, 00:20:35.402 { 00:20:35.402 "name": "BaseBdev4", 00:20:35.402 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:35.402 "is_configured": true, 00:20:35.402 "data_offset": 
2048, 00:20:35.402 "data_size": 63488 00:20:35.402 } 00:20:35.402 ] 00:20:35.402 }' 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.402 20:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.662 20:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:35.662 20:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.662 20:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.662 [2024-11-26 20:33:29.201009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.662 [2024-11-26 20:33:29.201315] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:35.662 [2024-11-26 20:33:29.201399] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
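The examine path logged above ("raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) ... Re-adding") decides whether a re-appearing base bdev is stale by comparing its superblock sequence number against the live raid bdev's. A simplified, hypothetical sketch of that decision (not SPDK's actual C code, just the comparison the log messages describe):

```python
def examine_decision(bdev_sb_seq: int, raid_sb_seq: int) -> str:
    """Mirror of the seq_number check reported by raid_bdev_examine_sb."""
    if bdev_sb_seq < raid_sb_seq:
        # Stale superblock: the bdev missed writes while detached,
        # so it is re-added and a rebuild process is started.
        return "re-add"
    # Sequence numbers match: the bdev is up to date and can be
    # configured into the raid directly.
    return "configure"

# Matches the log: spare's seq_number (4) < raid_bdev1's (5) -> re-added
assert examine_decision(4, 5) == "re-add"
assert examine_decision(5, 5) == "configure"
```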
00:20:35.662 [2024-11-26 20:33:29.201472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.921 [2024-11-26 20:33:29.219953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:20:35.921 20:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.921 20:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:35.921 [2024-11-26 20:33:29.231097] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.859 "name": "raid_bdev1", 00:20:36.859 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:36.859 "strip_size_kb": 64, 00:20:36.859 "state": "online", 00:20:36.859 
"raid_level": "raid5f", 00:20:36.859 "superblock": true, 00:20:36.859 "num_base_bdevs": 4, 00:20:36.859 "num_base_bdevs_discovered": 4, 00:20:36.859 "num_base_bdevs_operational": 4, 00:20:36.859 "process": { 00:20:36.859 "type": "rebuild", 00:20:36.859 "target": "spare", 00:20:36.859 "progress": { 00:20:36.859 "blocks": 17280, 00:20:36.859 "percent": 9 00:20:36.859 } 00:20:36.859 }, 00:20:36.859 "base_bdevs_list": [ 00:20:36.859 { 00:20:36.859 "name": "spare", 00:20:36.859 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:36.859 "is_configured": true, 00:20:36.859 "data_offset": 2048, 00:20:36.859 "data_size": 63488 00:20:36.859 }, 00:20:36.859 { 00:20:36.859 "name": "BaseBdev2", 00:20:36.859 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:36.859 "is_configured": true, 00:20:36.859 "data_offset": 2048, 00:20:36.859 "data_size": 63488 00:20:36.859 }, 00:20:36.859 { 00:20:36.859 "name": "BaseBdev3", 00:20:36.859 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:36.859 "is_configured": true, 00:20:36.859 "data_offset": 2048, 00:20:36.859 "data_size": 63488 00:20:36.859 }, 00:20:36.859 { 00:20:36.859 "name": "BaseBdev4", 00:20:36.859 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:36.859 "is_configured": true, 00:20:36.859 "data_offset": 2048, 00:20:36.859 "data_size": 63488 00:20:36.859 } 00:20:36.859 ] 00:20:36.859 }' 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.859 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.859 [2024-11-26 20:33:30.390439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.118 [2024-11-26 20:33:30.438444] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.118 [2024-11-26 20:33:30.438510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.118 [2024-11-26 20:33:30.438528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.118 [2024-11-26 20:33:30.438540] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.118 "name": "raid_bdev1", 00:20:37.118 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:37.118 "strip_size_kb": 64, 00:20:37.118 "state": "online", 00:20:37.118 "raid_level": "raid5f", 00:20:37.118 "superblock": true, 00:20:37.118 "num_base_bdevs": 4, 00:20:37.118 "num_base_bdevs_discovered": 3, 00:20:37.118 "num_base_bdevs_operational": 3, 00:20:37.118 "base_bdevs_list": [ 00:20:37.118 { 00:20:37.118 "name": null, 00:20:37.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.118 "is_configured": false, 00:20:37.118 "data_offset": 0, 00:20:37.118 "data_size": 63488 00:20:37.118 }, 00:20:37.118 { 00:20:37.118 "name": "BaseBdev2", 00:20:37.118 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:37.118 "is_configured": true, 00:20:37.118 "data_offset": 2048, 00:20:37.118 "data_size": 63488 00:20:37.118 }, 00:20:37.118 { 00:20:37.118 "name": "BaseBdev3", 00:20:37.118 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:37.118 "is_configured": true, 00:20:37.118 "data_offset": 2048, 00:20:37.118 "data_size": 63488 00:20:37.118 }, 00:20:37.118 { 00:20:37.118 "name": "BaseBdev4", 00:20:37.118 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:37.118 "is_configured": true, 00:20:37.118 "data_offset": 2048, 00:20:37.118 "data_size": 63488 00:20:37.118 } 00:20:37.118 ] 00:20:37.118 }' 
00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.118 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.377 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:37.377 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.377 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.636 [2024-11-26 20:33:30.933507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:37.636 [2024-11-26 20:33:30.933657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.636 [2024-11-26 20:33:30.933713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:37.636 [2024-11-26 20:33:30.933760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.636 [2024-11-26 20:33:30.934391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.636 [2024-11-26 20:33:30.934468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:37.636 [2024-11-26 20:33:30.934621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:37.636 [2024-11-26 20:33:30.934675] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:37.636 [2024-11-26 20:33:30.934742] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:37.636 [2024-11-26 20:33:30.934811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:37.636 [2024-11-26 20:33:30.952671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:20:37.636 spare 00:20:37.636 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.636 20:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:37.636 [2024-11-26 20:33:30.964180] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.576 20:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.576 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.576 "name": "raid_bdev1", 00:20:38.576 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:38.576 "strip_size_kb": 64, 00:20:38.576 "state": 
"online", 00:20:38.576 "raid_level": "raid5f", 00:20:38.576 "superblock": true, 00:20:38.576 "num_base_bdevs": 4, 00:20:38.576 "num_base_bdevs_discovered": 4, 00:20:38.576 "num_base_bdevs_operational": 4, 00:20:38.576 "process": { 00:20:38.576 "type": "rebuild", 00:20:38.576 "target": "spare", 00:20:38.576 "progress": { 00:20:38.576 "blocks": 17280, 00:20:38.576 "percent": 9 00:20:38.576 } 00:20:38.576 }, 00:20:38.576 "base_bdevs_list": [ 00:20:38.576 { 00:20:38.576 "name": "spare", 00:20:38.576 "uuid": "3d251d9a-b6e4-53ba-b957-c9d281303cad", 00:20:38.576 "is_configured": true, 00:20:38.576 "data_offset": 2048, 00:20:38.576 "data_size": 63488 00:20:38.576 }, 00:20:38.576 { 00:20:38.576 "name": "BaseBdev2", 00:20:38.576 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:38.576 "is_configured": true, 00:20:38.576 "data_offset": 2048, 00:20:38.576 "data_size": 63488 00:20:38.576 }, 00:20:38.576 { 00:20:38.576 "name": "BaseBdev3", 00:20:38.576 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:38.576 "is_configured": true, 00:20:38.576 "data_offset": 2048, 00:20:38.576 "data_size": 63488 00:20:38.576 }, 00:20:38.576 { 00:20:38.576 "name": "BaseBdev4", 00:20:38.576 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:38.576 "is_configured": true, 00:20:38.576 "data_offset": 2048, 00:20:38.576 "data_size": 63488 00:20:38.576 } 00:20:38.576 ] 00:20:38.576 }' 00:20:38.576 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.576 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.576 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.576 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.576 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:38.576 20:33:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.576 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.576 [2024-11-26 20:33:32.111847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.836 [2024-11-26 20:33:32.174017] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:38.836 [2024-11-26 20:33:32.174142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.836 [2024-11-26 20:33:32.174165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.836 [2024-11-26 20:33:32.174174] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.836 20:33:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.836 "name": "raid_bdev1", 00:20:38.836 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:38.836 "strip_size_kb": 64, 00:20:38.836 "state": "online", 00:20:38.836 "raid_level": "raid5f", 00:20:38.836 "superblock": true, 00:20:38.836 "num_base_bdevs": 4, 00:20:38.836 "num_base_bdevs_discovered": 3, 00:20:38.836 "num_base_bdevs_operational": 3, 00:20:38.836 "base_bdevs_list": [ 00:20:38.836 { 00:20:38.836 "name": null, 00:20:38.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.836 "is_configured": false, 00:20:38.836 "data_offset": 0, 00:20:38.836 "data_size": 63488 00:20:38.836 }, 00:20:38.836 { 00:20:38.836 "name": "BaseBdev2", 00:20:38.836 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:38.836 "is_configured": true, 00:20:38.836 "data_offset": 2048, 00:20:38.836 "data_size": 63488 00:20:38.836 }, 00:20:38.836 { 00:20:38.836 "name": "BaseBdev3", 00:20:38.836 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:38.836 "is_configured": true, 00:20:38.836 "data_offset": 2048, 00:20:38.836 "data_size": 63488 00:20:38.836 }, 00:20:38.836 { 00:20:38.836 "name": "BaseBdev4", 00:20:38.836 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:38.836 "is_configured": true, 00:20:38.836 "data_offset": 2048, 00:20:38.836 
"data_size": 63488 00:20:38.836 } 00:20:38.836 ] 00:20:38.836 }' 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.836 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.096 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.355 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.355 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.355 "name": "raid_bdev1", 00:20:39.355 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:39.356 "strip_size_kb": 64, 00:20:39.356 "state": "online", 00:20:39.356 "raid_level": "raid5f", 00:20:39.356 "superblock": true, 00:20:39.356 "num_base_bdevs": 4, 00:20:39.356 "num_base_bdevs_discovered": 3, 00:20:39.356 "num_base_bdevs_operational": 3, 00:20:39.356 "base_bdevs_list": [ 00:20:39.356 { 00:20:39.356 "name": null, 00:20:39.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.356 
"is_configured": false, 00:20:39.356 "data_offset": 0, 00:20:39.356 "data_size": 63488 00:20:39.356 }, 00:20:39.356 { 00:20:39.356 "name": "BaseBdev2", 00:20:39.356 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:39.356 "is_configured": true, 00:20:39.356 "data_offset": 2048, 00:20:39.356 "data_size": 63488 00:20:39.356 }, 00:20:39.356 { 00:20:39.356 "name": "BaseBdev3", 00:20:39.356 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:39.356 "is_configured": true, 00:20:39.356 "data_offset": 2048, 00:20:39.356 "data_size": 63488 00:20:39.356 }, 00:20:39.356 { 00:20:39.356 "name": "BaseBdev4", 00:20:39.356 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:39.356 "is_configured": true, 00:20:39.356 "data_offset": 2048, 00:20:39.356 "data_size": 63488 00:20:39.356 } 00:20:39.356 ] 00:20:39.356 }' 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.356 20:33:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.356 [2024-11-26 20:33:32.777339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:39.356 [2024-11-26 20:33:32.777470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.356 [2024-11-26 20:33:32.777504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:39.356 [2024-11-26 20:33:32.777518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.356 [2024-11-26 20:33:32.778104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.356 [2024-11-26 20:33:32.778139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:39.356 [2024-11-26 20:33:32.778244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:39.356 [2024-11-26 20:33:32.778284] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:39.356 [2024-11-26 20:33:32.778300] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:39.356 [2024-11-26 20:33:32.778311] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:39.356 BaseBdev1 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.356 20:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:40.292 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:40.292 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.292 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:40.292 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.292 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.292 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.292 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.293 "name": "raid_bdev1", 00:20:40.293 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:40.293 "strip_size_kb": 64, 00:20:40.293 "state": "online", 00:20:40.293 "raid_level": "raid5f", 00:20:40.293 "superblock": true, 00:20:40.293 "num_base_bdevs": 4, 00:20:40.293 "num_base_bdevs_discovered": 3, 00:20:40.293 "num_base_bdevs_operational": 3, 00:20:40.293 "base_bdevs_list": [ 00:20:40.293 { 00:20:40.293 "name": null, 00:20:40.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.293 "is_configured": false, 00:20:40.293 
"data_offset": 0, 00:20:40.293 "data_size": 63488 00:20:40.293 }, 00:20:40.293 { 00:20:40.293 "name": "BaseBdev2", 00:20:40.293 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:40.293 "is_configured": true, 00:20:40.293 "data_offset": 2048, 00:20:40.293 "data_size": 63488 00:20:40.293 }, 00:20:40.293 { 00:20:40.293 "name": "BaseBdev3", 00:20:40.293 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:40.293 "is_configured": true, 00:20:40.293 "data_offset": 2048, 00:20:40.293 "data_size": 63488 00:20:40.293 }, 00:20:40.293 { 00:20:40.293 "name": "BaseBdev4", 00:20:40.293 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:40.293 "is_configured": true, 00:20:40.293 "data_offset": 2048, 00:20:40.293 "data_size": 63488 00:20:40.293 } 00:20:40.293 ] 00:20:40.293 }' 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.293 20:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.860 "name": "raid_bdev1", 00:20:40.860 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:40.860 "strip_size_kb": 64, 00:20:40.860 "state": "online", 00:20:40.860 "raid_level": "raid5f", 00:20:40.860 "superblock": true, 00:20:40.860 "num_base_bdevs": 4, 00:20:40.860 "num_base_bdevs_discovered": 3, 00:20:40.860 "num_base_bdevs_operational": 3, 00:20:40.860 "base_bdevs_list": [ 00:20:40.860 { 00:20:40.860 "name": null, 00:20:40.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.860 "is_configured": false, 00:20:40.860 "data_offset": 0, 00:20:40.860 "data_size": 63488 00:20:40.860 }, 00:20:40.860 { 00:20:40.860 "name": "BaseBdev2", 00:20:40.860 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:40.860 "is_configured": true, 00:20:40.860 "data_offset": 2048, 00:20:40.860 "data_size": 63488 00:20:40.860 }, 00:20:40.860 { 00:20:40.860 "name": "BaseBdev3", 00:20:40.860 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:40.860 "is_configured": true, 00:20:40.860 "data_offset": 2048, 00:20:40.860 "data_size": 63488 00:20:40.860 }, 00:20:40.860 { 00:20:40.860 "name": "BaseBdev4", 00:20:40.860 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:40.860 "is_configured": true, 00:20:40.860 "data_offset": 2048, 00:20:40.860 "data_size": 63488 00:20:40.860 } 00:20:40.860 ] 00:20:40.860 }' 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.860 
20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.860 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.861 [2024-11-26 20:33:34.358874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.861 [2024-11-26 20:33:34.359126] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:40.861 [2024-11-26 20:33:34.359201] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:40.861 request: 00:20:40.861 { 00:20:40.861 "base_bdev": "BaseBdev1", 00:20:40.861 "raid_bdev": "raid_bdev1", 00:20:40.861 "method": "bdev_raid_add_base_bdev", 00:20:40.861 "req_id": 1 00:20:40.861 } 00:20:40.861 Got JSON-RPC error response 00:20:40.861 response: 00:20:40.861 { 00:20:40.861 "code": -22, 00:20:40.861 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:40.861 } 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.861 20:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.238 "name": "raid_bdev1", 00:20:42.238 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:42.238 "strip_size_kb": 64, 00:20:42.238 "state": "online", 00:20:42.238 "raid_level": "raid5f", 00:20:42.238 "superblock": true, 00:20:42.238 "num_base_bdevs": 4, 00:20:42.238 "num_base_bdevs_discovered": 3, 00:20:42.238 "num_base_bdevs_operational": 3, 00:20:42.238 "base_bdevs_list": [ 00:20:42.238 { 00:20:42.238 "name": null, 00:20:42.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.238 "is_configured": false, 00:20:42.238 "data_offset": 0, 00:20:42.238 "data_size": 63488 00:20:42.238 }, 00:20:42.238 { 00:20:42.238 "name": "BaseBdev2", 00:20:42.238 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:42.238 "is_configured": true, 00:20:42.238 "data_offset": 2048, 00:20:42.238 "data_size": 63488 00:20:42.238 }, 00:20:42.238 { 00:20:42.238 "name": "BaseBdev3", 00:20:42.238 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:42.238 "is_configured": true, 00:20:42.238 "data_offset": 2048, 00:20:42.238 "data_size": 63488 00:20:42.238 }, 00:20:42.238 { 00:20:42.238 "name": "BaseBdev4", 00:20:42.238 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:42.238 "is_configured": true, 00:20:42.238 "data_offset": 2048, 00:20:42.238 "data_size": 63488 00:20:42.238 } 00:20:42.238 ] 00:20:42.238 }' 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.238 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.497 "name": "raid_bdev1", 00:20:42.497 "uuid": "2830e8f4-8a42-42b1-ab85-ddf14c4fe86b", 00:20:42.497 "strip_size_kb": 64, 00:20:42.497 "state": "online", 00:20:42.497 "raid_level": "raid5f", 00:20:42.497 "superblock": true, 00:20:42.497 "num_base_bdevs": 4, 00:20:42.497 "num_base_bdevs_discovered": 3, 00:20:42.497 "num_base_bdevs_operational": 3, 00:20:42.497 "base_bdevs_list": [ 00:20:42.497 { 00:20:42.497 "name": null, 00:20:42.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.497 "is_configured": false, 00:20:42.497 "data_offset": 0, 00:20:42.497 "data_size": 63488 00:20:42.497 }, 00:20:42.497 { 00:20:42.497 "name": "BaseBdev2", 00:20:42.497 "uuid": "c8a5178e-82e3-5b37-b295-34184e9e8f3b", 00:20:42.497 "is_configured": true, 
00:20:42.497 "data_offset": 2048, 00:20:42.497 "data_size": 63488 00:20:42.497 }, 00:20:42.497 { 00:20:42.497 "name": "BaseBdev3", 00:20:42.497 "uuid": "5847fefb-47f4-5894-9fd7-35dfb47266ab", 00:20:42.497 "is_configured": true, 00:20:42.497 "data_offset": 2048, 00:20:42.497 "data_size": 63488 00:20:42.497 }, 00:20:42.497 { 00:20:42.497 "name": "BaseBdev4", 00:20:42.497 "uuid": "b8e99c6a-6b78-566d-8850-897e38fafe60", 00:20:42.497 "is_configured": true, 00:20:42.497 "data_offset": 2048, 00:20:42.497 "data_size": 63488 00:20:42.497 } 00:20:42.497 ] 00:20:42.497 }' 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85597 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85597 ']' 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85597 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85597 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 85597' 00:20:42.497 killing process with pid 85597 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85597 00:20:42.497 Received shutdown signal, test time was about 60.000000 seconds 00:20:42.497 00:20:42.497 Latency(us) 00:20:42.497 [2024-11-26T20:33:36.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.497 [2024-11-26T20:33:36.052Z] =================================================================================================================== 00:20:42.497 [2024-11-26T20:33:36.052Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.497 [2024-11-26 20:33:35.927639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:42.497 20:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85597 00:20:42.497 [2024-11-26 20:33:35.927788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.497 [2024-11-26 20:33:35.927877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.497 [2024-11-26 20:33:35.927892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:43.064 [2024-11-26 20:33:36.440409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:44.441 20:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:44.441 00:20:44.441 real 0m27.017s 00:20:44.441 user 0m33.802s 00:20:44.441 sys 0m2.959s 00:20:44.441 20:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.441 20:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.441 ************************************ 00:20:44.441 END TEST raid5f_rebuild_test_sb 00:20:44.441 ************************************ 00:20:44.441 20:33:37 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:44.441 20:33:37 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:44.441 20:33:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:44.441 20:33:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.441 20:33:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.441 ************************************ 00:20:44.441 START TEST raid_state_function_test_sb_4k 00:20:44.441 ************************************ 00:20:44.441 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:44.441 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:44.441 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:44.442 20:33:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86408 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86408' 00:20:44.442 Process raid pid: 86408 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86408 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86408 ']' 00:20:44.442 20:33:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.442 20:33:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:44.442 [2024-11-26 20:33:37.740705] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:20:44.442 [2024-11-26 20:33:37.740835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.442 [2024-11-26 20:33:37.914544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.701 [2024-11-26 20:33:38.035340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.701 [2024-11-26 20:33:38.244594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.701 [2024-11-26 20:33:38.244645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.269 [2024-11-26 20:33:38.607506] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.269 [2024-11-26 20:33:38.607569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.269 [2024-11-26 20:33:38.607581] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:45.269 [2024-11-26 20:33:38.607592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.269 
20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.269 "name": "Existed_Raid", 00:20:45.269 "uuid": "e1ee7023-621d-48d3-8432-ac3cb1ee896a", 00:20:45.269 "strip_size_kb": 0, 00:20:45.269 "state": "configuring", 00:20:45.269 "raid_level": "raid1", 00:20:45.269 "superblock": true, 00:20:45.269 "num_base_bdevs": 2, 00:20:45.269 "num_base_bdevs_discovered": 0, 00:20:45.269 "num_base_bdevs_operational": 2, 00:20:45.269 "base_bdevs_list": [ 00:20:45.269 { 00:20:45.269 "name": "BaseBdev1", 00:20:45.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.269 "is_configured": false, 00:20:45.269 "data_offset": 0, 00:20:45.269 "data_size": 0 00:20:45.269 }, 00:20:45.269 { 00:20:45.269 "name": "BaseBdev2", 00:20:45.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.269 "is_configured": false, 00:20:45.269 "data_offset": 0, 00:20:45.269 "data_size": 0 00:20:45.269 } 00:20:45.269 ] 00:20:45.269 }' 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.269 20:33:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.528 [2024-11-26 20:33:39.038691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:45.528 [2024-11-26 20:33:39.038795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.528 [2024-11-26 20:33:39.050671] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.528 [2024-11-26 20:33:39.050763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.528 [2024-11-26 20:33:39.050798] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:45.528 [2024-11-26 20:33:39.050835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:45.528 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.528 20:33:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.887 [2024-11-26 20:33:39.102419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:45.887 BaseBdev1 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.887 [ 00:20:45.887 { 00:20:45.887 "name": "BaseBdev1", 00:20:45.887 "aliases": [ 00:20:45.887 
"a61e2123-a83f-4c87-b22b-fbe6392ca1d5" 00:20:45.887 ], 00:20:45.887 "product_name": "Malloc disk", 00:20:45.887 "block_size": 4096, 00:20:45.887 "num_blocks": 8192, 00:20:45.887 "uuid": "a61e2123-a83f-4c87-b22b-fbe6392ca1d5", 00:20:45.887 "assigned_rate_limits": { 00:20:45.887 "rw_ios_per_sec": 0, 00:20:45.887 "rw_mbytes_per_sec": 0, 00:20:45.887 "r_mbytes_per_sec": 0, 00:20:45.887 "w_mbytes_per_sec": 0 00:20:45.887 }, 00:20:45.887 "claimed": true, 00:20:45.887 "claim_type": "exclusive_write", 00:20:45.887 "zoned": false, 00:20:45.887 "supported_io_types": { 00:20:45.887 "read": true, 00:20:45.887 "write": true, 00:20:45.887 "unmap": true, 00:20:45.887 "flush": true, 00:20:45.887 "reset": true, 00:20:45.887 "nvme_admin": false, 00:20:45.887 "nvme_io": false, 00:20:45.887 "nvme_io_md": false, 00:20:45.887 "write_zeroes": true, 00:20:45.887 "zcopy": true, 00:20:45.887 "get_zone_info": false, 00:20:45.887 "zone_management": false, 00:20:45.887 "zone_append": false, 00:20:45.887 "compare": false, 00:20:45.887 "compare_and_write": false, 00:20:45.887 "abort": true, 00:20:45.887 "seek_hole": false, 00:20:45.887 "seek_data": false, 00:20:45.887 "copy": true, 00:20:45.887 "nvme_iov_md": false 00:20:45.887 }, 00:20:45.887 "memory_domains": [ 00:20:45.887 { 00:20:45.887 "dma_device_id": "system", 00:20:45.887 "dma_device_type": 1 00:20:45.887 }, 00:20:45.887 { 00:20:45.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.887 "dma_device_type": 2 00:20:45.887 } 00:20:45.887 ], 00:20:45.887 "driver_specific": {} 00:20:45.887 } 00:20:45.887 ] 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.887 "name": "Existed_Raid", 00:20:45.887 "uuid": "cb4d5984-6e2e-4260-ad89-0c5536167939", 00:20:45.887 "strip_size_kb": 0, 00:20:45.887 "state": "configuring", 00:20:45.887 "raid_level": "raid1", 00:20:45.887 "superblock": true, 00:20:45.887 "num_base_bdevs": 2, 00:20:45.887 
"num_base_bdevs_discovered": 1, 00:20:45.887 "num_base_bdevs_operational": 2, 00:20:45.887 "base_bdevs_list": [ 00:20:45.887 { 00:20:45.887 "name": "BaseBdev1", 00:20:45.887 "uuid": "a61e2123-a83f-4c87-b22b-fbe6392ca1d5", 00:20:45.887 "is_configured": true, 00:20:45.887 "data_offset": 256, 00:20:45.887 "data_size": 7936 00:20:45.887 }, 00:20:45.887 { 00:20:45.887 "name": "BaseBdev2", 00:20:45.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.887 "is_configured": false, 00:20:45.887 "data_offset": 0, 00:20:45.887 "data_size": 0 00:20:45.887 } 00:20:45.887 ] 00:20:45.887 }' 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.887 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.158 [2024-11-26 20:33:39.557727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:46.158 [2024-11-26 20:33:39.557843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.158 [2024-11-26 20:33:39.569760] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.158 [2024-11-26 20:33:39.571907] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:46.158 [2024-11-26 20:33:39.571957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.158 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.159 "name": "Existed_Raid", 00:20:46.159 "uuid": "165c80b5-639a-4938-bf4c-c1b0312b5e9d", 00:20:46.159 "strip_size_kb": 0, 00:20:46.159 "state": "configuring", 00:20:46.159 "raid_level": "raid1", 00:20:46.159 "superblock": true, 00:20:46.159 "num_base_bdevs": 2, 00:20:46.159 "num_base_bdevs_discovered": 1, 00:20:46.159 "num_base_bdevs_operational": 2, 00:20:46.159 "base_bdevs_list": [ 00:20:46.159 { 00:20:46.159 "name": "BaseBdev1", 00:20:46.159 "uuid": "a61e2123-a83f-4c87-b22b-fbe6392ca1d5", 00:20:46.159 "is_configured": true, 00:20:46.159 "data_offset": 256, 00:20:46.159 "data_size": 7936 00:20:46.159 }, 00:20:46.159 { 00:20:46.159 "name": "BaseBdev2", 00:20:46.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.159 "is_configured": false, 00:20:46.159 "data_offset": 0, 00:20:46.159 "data_size": 0 00:20:46.159 } 00:20:46.159 ] 00:20:46.159 }' 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.159 20:33:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.728 20:33:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.728 [2024-11-26 20:33:40.049906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.728 [2024-11-26 20:33:40.050203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:46.728 [2024-11-26 20:33:40.050218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:46.728 [2024-11-26 20:33:40.050500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:46.728 [2024-11-26 20:33:40.050699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:46.728 [2024-11-26 20:33:40.050716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:46.728 BaseBdev2 00:20:46.728 [2024-11-26 20:33:40.050876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:46.728 20:33:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.728 [ 00:20:46.728 { 00:20:46.728 "name": "BaseBdev2", 00:20:46.728 "aliases": [ 00:20:46.728 "6d3768e0-8adf-4690-b527-250adde3a58b" 00:20:46.728 ], 00:20:46.728 "product_name": "Malloc disk", 00:20:46.728 "block_size": 4096, 00:20:46.728 "num_blocks": 8192, 00:20:46.728 "uuid": "6d3768e0-8adf-4690-b527-250adde3a58b", 00:20:46.728 "assigned_rate_limits": { 00:20:46.728 "rw_ios_per_sec": 0, 00:20:46.728 "rw_mbytes_per_sec": 0, 00:20:46.728 "r_mbytes_per_sec": 0, 00:20:46.728 "w_mbytes_per_sec": 0 00:20:46.728 }, 00:20:46.728 "claimed": true, 00:20:46.728 "claim_type": "exclusive_write", 00:20:46.728 "zoned": false, 00:20:46.728 "supported_io_types": { 00:20:46.728 "read": true, 00:20:46.728 "write": true, 00:20:46.728 "unmap": true, 00:20:46.728 "flush": true, 00:20:46.728 "reset": true, 00:20:46.728 "nvme_admin": false, 00:20:46.728 "nvme_io": false, 00:20:46.728 "nvme_io_md": false, 00:20:46.728 "write_zeroes": true, 00:20:46.728 "zcopy": true, 00:20:46.728 "get_zone_info": false, 00:20:46.728 "zone_management": false, 00:20:46.728 "zone_append": false, 00:20:46.728 "compare": false, 00:20:46.728 "compare_and_write": false, 00:20:46.728 "abort": true, 00:20:46.728 "seek_hole": false, 00:20:46.728 "seek_data": false, 00:20:46.728 "copy": true, 00:20:46.728 "nvme_iov_md": false 
00:20:46.728 }, 00:20:46.728 "memory_domains": [ 00:20:46.728 { 00:20:46.728 "dma_device_id": "system", 00:20:46.728 "dma_device_type": 1 00:20:46.728 }, 00:20:46.728 { 00:20:46.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.728 "dma_device_type": 2 00:20:46.728 } 00:20:46.728 ], 00:20:46.728 "driver_specific": {} 00:20:46.728 } 00:20:46.728 ] 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.728 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.728 "name": "Existed_Raid", 00:20:46.729 "uuid": "165c80b5-639a-4938-bf4c-c1b0312b5e9d", 00:20:46.729 "strip_size_kb": 0, 00:20:46.729 "state": "online", 00:20:46.729 "raid_level": "raid1", 00:20:46.729 "superblock": true, 00:20:46.729 "num_base_bdevs": 2, 00:20:46.729 "num_base_bdevs_discovered": 2, 00:20:46.729 "num_base_bdevs_operational": 2, 00:20:46.729 "base_bdevs_list": [ 00:20:46.729 { 00:20:46.729 "name": "BaseBdev1", 00:20:46.729 "uuid": "a61e2123-a83f-4c87-b22b-fbe6392ca1d5", 00:20:46.729 "is_configured": true, 00:20:46.729 "data_offset": 256, 00:20:46.729 "data_size": 7936 00:20:46.729 }, 00:20:46.729 { 00:20:46.729 "name": "BaseBdev2", 00:20:46.729 "uuid": "6d3768e0-8adf-4690-b527-250adde3a58b", 00:20:46.729 "is_configured": true, 00:20:46.729 "data_offset": 256, 00:20:46.729 "data_size": 7936 00:20:46.729 } 00:20:46.729 ] 00:20:46.729 }' 00:20:46.729 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.729 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:46.989 20:33:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.989 [2024-11-26 20:33:40.505551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.989 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:47.249 "name": "Existed_Raid", 00:20:47.249 "aliases": [ 00:20:47.249 "165c80b5-639a-4938-bf4c-c1b0312b5e9d" 00:20:47.249 ], 00:20:47.249 "product_name": "Raid Volume", 00:20:47.249 "block_size": 4096, 00:20:47.249 "num_blocks": 7936, 00:20:47.249 "uuid": "165c80b5-639a-4938-bf4c-c1b0312b5e9d", 00:20:47.249 "assigned_rate_limits": { 00:20:47.249 "rw_ios_per_sec": 0, 00:20:47.249 "rw_mbytes_per_sec": 0, 00:20:47.249 "r_mbytes_per_sec": 0, 00:20:47.249 "w_mbytes_per_sec": 0 00:20:47.249 }, 00:20:47.249 "claimed": false, 00:20:47.249 "zoned": false, 00:20:47.249 "supported_io_types": { 00:20:47.249 "read": true, 
00:20:47.249 "write": true, 00:20:47.249 "unmap": false, 00:20:47.249 "flush": false, 00:20:47.249 "reset": true, 00:20:47.249 "nvme_admin": false, 00:20:47.249 "nvme_io": false, 00:20:47.249 "nvme_io_md": false, 00:20:47.249 "write_zeroes": true, 00:20:47.249 "zcopy": false, 00:20:47.249 "get_zone_info": false, 00:20:47.249 "zone_management": false, 00:20:47.249 "zone_append": false, 00:20:47.249 "compare": false, 00:20:47.249 "compare_and_write": false, 00:20:47.249 "abort": false, 00:20:47.249 "seek_hole": false, 00:20:47.249 "seek_data": false, 00:20:47.249 "copy": false, 00:20:47.249 "nvme_iov_md": false 00:20:47.249 }, 00:20:47.249 "memory_domains": [ 00:20:47.249 { 00:20:47.249 "dma_device_id": "system", 00:20:47.249 "dma_device_type": 1 00:20:47.249 }, 00:20:47.249 { 00:20:47.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.249 "dma_device_type": 2 00:20:47.249 }, 00:20:47.249 { 00:20:47.249 "dma_device_id": "system", 00:20:47.249 "dma_device_type": 1 00:20:47.249 }, 00:20:47.249 { 00:20:47.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.249 "dma_device_type": 2 00:20:47.249 } 00:20:47.249 ], 00:20:47.249 "driver_specific": { 00:20:47.249 "raid": { 00:20:47.249 "uuid": "165c80b5-639a-4938-bf4c-c1b0312b5e9d", 00:20:47.249 "strip_size_kb": 0, 00:20:47.249 "state": "online", 00:20:47.249 "raid_level": "raid1", 00:20:47.249 "superblock": true, 00:20:47.249 "num_base_bdevs": 2, 00:20:47.249 "num_base_bdevs_discovered": 2, 00:20:47.249 "num_base_bdevs_operational": 2, 00:20:47.249 "base_bdevs_list": [ 00:20:47.249 { 00:20:47.249 "name": "BaseBdev1", 00:20:47.249 "uuid": "a61e2123-a83f-4c87-b22b-fbe6392ca1d5", 00:20:47.249 "is_configured": true, 00:20:47.249 "data_offset": 256, 00:20:47.249 "data_size": 7936 00:20:47.249 }, 00:20:47.249 { 00:20:47.249 "name": "BaseBdev2", 00:20:47.249 "uuid": "6d3768e0-8adf-4690-b527-250adde3a58b", 00:20:47.249 "is_configured": true, 00:20:47.249 "data_offset": 256, 00:20:47.249 "data_size": 7936 00:20:47.249 } 
00:20:47.249 ] 00:20:47.249 } 00:20:47.249 } 00:20:47.249 }' 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:47.249 BaseBdev2' 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:47.249 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.250 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.250 [2024-11-26 20:33:40.748934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.509 "name": "Existed_Raid", 00:20:47.509 "uuid": "165c80b5-639a-4938-bf4c-c1b0312b5e9d", 00:20:47.509 "strip_size_kb": 0, 00:20:47.509 "state": "online", 00:20:47.509 "raid_level": "raid1", 00:20:47.509 "superblock": true, 00:20:47.509 "num_base_bdevs": 2, 00:20:47.509 
"num_base_bdevs_discovered": 1, 00:20:47.509 "num_base_bdevs_operational": 1, 00:20:47.509 "base_bdevs_list": [ 00:20:47.509 { 00:20:47.509 "name": null, 00:20:47.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.509 "is_configured": false, 00:20:47.509 "data_offset": 0, 00:20:47.509 "data_size": 7936 00:20:47.509 }, 00:20:47.509 { 00:20:47.509 "name": "BaseBdev2", 00:20:47.509 "uuid": "6d3768e0-8adf-4690-b527-250adde3a58b", 00:20:47.509 "is_configured": true, 00:20:47.509 "data_offset": 256, 00:20:47.509 "data_size": 7936 00:20:47.509 } 00:20:47.509 ] 00:20:47.509 }' 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.509 20:33:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.767 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:47.767 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:48.026 20:33:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.026 [2024-11-26 20:33:41.354855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:48.026 [2024-11-26 20:33:41.355020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.026 [2024-11-26 20:33:41.458274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.026 [2024-11-26 20:33:41.458393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.026 [2024-11-26 20:33:41.458448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86408 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86408 ']' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86408 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86408 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86408' 00:20:48.026 killing process with pid 86408 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86408 00:20:48.026 20:33:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86408 00:20:48.026 [2024-11-26 20:33:41.554513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.026 [2024-11-26 20:33:41.574737] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.405 20:33:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:49.405 00:20:49.405 real 0m5.209s 00:20:49.405 user 0m7.405s 00:20:49.406 sys 0m0.821s 00:20:49.406 20:33:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:20:49.406 20:33:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.406 ************************************ 00:20:49.406 END TEST raid_state_function_test_sb_4k 00:20:49.406 ************************************ 00:20:49.406 20:33:42 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:49.406 20:33:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:49.406 20:33:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.406 20:33:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.406 ************************************ 00:20:49.406 START TEST raid_superblock_test_4k 00:20:49.406 ************************************ 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86661 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86661 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86661 ']' 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.406 20:33:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.664 [2024-11-26 20:33:43.034377] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:20:49.664 [2024-11-26 20:33:43.034515] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86661 ] 00:20:49.664 [2024-11-26 20:33:43.208460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.924 [2024-11-26 20:33:43.340518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.183 [2024-11-26 20:33:43.571253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.183 [2024-11-26 20:33:43.571374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.500 20:33:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.500 malloc1 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.500 [2024-11-26 20:33:44.024013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:50.500 [2024-11-26 20:33:44.024101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.500 [2024-11-26 20:33:44.024131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:50.500 [2024-11-26 20:33:44.024142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.500 [2024-11-26 20:33:44.026753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.500 [2024-11-26 20:33:44.026800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:50.500 pt1 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.500 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.759 malloc2 00:20:50.759 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.759 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:50.759 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.759 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.760 [2024-11-26 20:33:44.085608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:50.760 [2024-11-26 20:33:44.085738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.760 [2024-11-26 20:33:44.085791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:50.760 [2024-11-26 20:33:44.085826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.760 [2024-11-26 20:33:44.088242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.760 [2024-11-26 
20:33:44.088346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:50.760 pt2 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.760 [2024-11-26 20:33:44.097657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:50.760 [2024-11-26 20:33:44.099777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:50.760 [2024-11-26 20:33:44.100068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:50.760 [2024-11-26 20:33:44.100130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:50.760 [2024-11-26 20:33:44.100492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:50.760 [2024-11-26 20:33:44.100744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:50.760 [2024-11-26 20:33:44.100806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:50.760 [2024-11-26 20:33:44.101078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.760 "name": "raid_bdev1", 00:20:50.760 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:50.760 "strip_size_kb": 0, 00:20:50.760 "state": "online", 00:20:50.760 "raid_level": "raid1", 00:20:50.760 "superblock": true, 00:20:50.760 "num_base_bdevs": 2, 00:20:50.760 
"num_base_bdevs_discovered": 2, 00:20:50.760 "num_base_bdevs_operational": 2, 00:20:50.760 "base_bdevs_list": [ 00:20:50.760 { 00:20:50.760 "name": "pt1", 00:20:50.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:50.760 "is_configured": true, 00:20:50.760 "data_offset": 256, 00:20:50.760 "data_size": 7936 00:20:50.760 }, 00:20:50.760 { 00:20:50.760 "name": "pt2", 00:20:50.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:50.760 "is_configured": true, 00:20:50.760 "data_offset": 256, 00:20:50.760 "data_size": 7936 00:20:50.760 } 00:20:50.760 ] 00:20:50.760 }' 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.760 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.020 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.020 [2024-11-26 20:33:44.553228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.280 "name": "raid_bdev1", 00:20:51.280 "aliases": [ 00:20:51.280 "5b993779-bba0-4884-90c7-d843d5e14582" 00:20:51.280 ], 00:20:51.280 "product_name": "Raid Volume", 00:20:51.280 "block_size": 4096, 00:20:51.280 "num_blocks": 7936, 00:20:51.280 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:51.280 "assigned_rate_limits": { 00:20:51.280 "rw_ios_per_sec": 0, 00:20:51.280 "rw_mbytes_per_sec": 0, 00:20:51.280 "r_mbytes_per_sec": 0, 00:20:51.280 "w_mbytes_per_sec": 0 00:20:51.280 }, 00:20:51.280 "claimed": false, 00:20:51.280 "zoned": false, 00:20:51.280 "supported_io_types": { 00:20:51.280 "read": true, 00:20:51.280 "write": true, 00:20:51.280 "unmap": false, 00:20:51.280 "flush": false, 00:20:51.280 "reset": true, 00:20:51.280 "nvme_admin": false, 00:20:51.280 "nvme_io": false, 00:20:51.280 "nvme_io_md": false, 00:20:51.280 "write_zeroes": true, 00:20:51.280 "zcopy": false, 00:20:51.280 "get_zone_info": false, 00:20:51.280 "zone_management": false, 00:20:51.280 "zone_append": false, 00:20:51.280 "compare": false, 00:20:51.280 "compare_and_write": false, 00:20:51.280 "abort": false, 00:20:51.280 "seek_hole": false, 00:20:51.280 "seek_data": false, 00:20:51.280 "copy": false, 00:20:51.280 "nvme_iov_md": false 00:20:51.280 }, 00:20:51.280 "memory_domains": [ 00:20:51.280 { 00:20:51.280 "dma_device_id": "system", 00:20:51.280 "dma_device_type": 1 00:20:51.280 }, 00:20:51.280 { 00:20:51.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.280 "dma_device_type": 2 00:20:51.280 }, 00:20:51.280 { 00:20:51.280 "dma_device_id": "system", 00:20:51.280 "dma_device_type": 1 00:20:51.280 }, 00:20:51.280 { 00:20:51.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.280 "dma_device_type": 2 00:20:51.280 } 00:20:51.280 ], 
00:20:51.280 "driver_specific": { 00:20:51.280 "raid": { 00:20:51.280 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:51.280 "strip_size_kb": 0, 00:20:51.280 "state": "online", 00:20:51.280 "raid_level": "raid1", 00:20:51.280 "superblock": true, 00:20:51.280 "num_base_bdevs": 2, 00:20:51.280 "num_base_bdevs_discovered": 2, 00:20:51.280 "num_base_bdevs_operational": 2, 00:20:51.280 "base_bdevs_list": [ 00:20:51.280 { 00:20:51.280 "name": "pt1", 00:20:51.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.280 "is_configured": true, 00:20:51.280 "data_offset": 256, 00:20:51.280 "data_size": 7936 00:20:51.280 }, 00:20:51.280 { 00:20:51.280 "name": "pt2", 00:20:51.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.280 "is_configured": true, 00:20:51.280 "data_offset": 256, 00:20:51.280 "data_size": 7936 00:20:51.280 } 00:20:51.280 ] 00:20:51.280 } 00:20:51.280 } 00:20:51.280 }' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:51.280 pt2' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:51.280 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.280 [2024-11-26 20:33:44.816758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.541 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:51.541 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5b993779-bba0-4884-90c7-d843d5e14582 00:20:51.541 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5b993779-bba0-4884-90c7-d843d5e14582 ']' 00:20:51.541 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:51.541 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.541 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.541 [2024-11-26 20:33:44.864349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.541 [2024-11-26 20:33:44.864393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.541 [2024-11-26 20:33:44.864490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.541 [2024-11-26 20:33:44.864553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.541 [2024-11-26 20:33:44.864564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:51.541 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.542 20:33:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.542 [2024-11-26 20:33:45.008141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:51.542 [2024-11-26 20:33:45.010272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:51.542 [2024-11-26 20:33:45.010404] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:51.542 [2024-11-26 20:33:45.010534] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:51.542 [2024-11-26 20:33:45.010624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.542 [2024-11-26 20:33:45.010639] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:51.542 request: 00:20:51.542 { 00:20:51.542 "name": "raid_bdev1", 00:20:51.542 "raid_level": "raid1", 00:20:51.542 "base_bdevs": [ 00:20:51.542 "malloc1", 00:20:51.542 "malloc2" 00:20:51.542 ], 00:20:51.542 "superblock": false, 00:20:51.542 "method": "bdev_raid_create", 00:20:51.542 "req_id": 1 00:20:51.542 } 00:20:51.542 Got JSON-RPC error response 00:20:51.542 response: 00:20:51.542 { 00:20:51.542 "code": -17, 00:20:51.542 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:51.542 } 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.542 [2024-11-26 20:33:45.064031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:51.542 [2024-11-26 20:33:45.064176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.542 [2024-11-26 20:33:45.064248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:51.542 [2024-11-26 20:33:45.064308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.542 [2024-11-26 20:33:45.066821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.542 [2024-11-26 20:33:45.066907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:51.542 [2024-11-26 20:33:45.067040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:51.542 [2024-11-26 20:33:45.067139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:51.542 pt1 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.542 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.543 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.543 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.543 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.543 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.801 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.801 "name": "raid_bdev1", 00:20:51.801 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:51.801 "strip_size_kb": 0, 00:20:51.801 "state": "configuring", 00:20:51.801 "raid_level": "raid1", 00:20:51.801 "superblock": true, 00:20:51.801 "num_base_bdevs": 2, 00:20:51.801 "num_base_bdevs_discovered": 1, 00:20:51.801 "num_base_bdevs_operational": 2, 00:20:51.801 "base_bdevs_list": [ 00:20:51.801 { 00:20:51.801 "name": "pt1", 00:20:51.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.801 "is_configured": true, 00:20:51.801 "data_offset": 256, 00:20:51.801 "data_size": 7936 00:20:51.801 }, 00:20:51.801 { 00:20:51.801 "name": null, 00:20:51.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.801 "is_configured": false, 00:20:51.801 "data_offset": 256, 00:20:51.801 "data_size": 7936 00:20:51.801 } 
00:20:51.801 ] 00:20:51.801 }' 00:20:51.801 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.801 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.061 [2024-11-26 20:33:45.535254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:52.061 [2024-11-26 20:33:45.535356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.061 [2024-11-26 20:33:45.535382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:52.061 [2024-11-26 20:33:45.535394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.061 [2024-11-26 20:33:45.535914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.061 [2024-11-26 20:33:45.535940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:52.061 [2024-11-26 20:33:45.536033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:52.061 [2024-11-26 20:33:45.536063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:52.061 [2024-11-26 20:33:45.536195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:20:52.061 [2024-11-26 20:33:45.536207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:52.061 [2024-11-26 20:33:45.536553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:52.061 [2024-11-26 20:33:45.536765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:52.061 [2024-11-26 20:33:45.536779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:52.061 [2024-11-26 20:33:45.536941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.061 pt2 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.061 "name": "raid_bdev1", 00:20:52.061 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:52.061 "strip_size_kb": 0, 00:20:52.061 "state": "online", 00:20:52.061 "raid_level": "raid1", 00:20:52.061 "superblock": true, 00:20:52.061 "num_base_bdevs": 2, 00:20:52.061 "num_base_bdevs_discovered": 2, 00:20:52.061 "num_base_bdevs_operational": 2, 00:20:52.061 "base_bdevs_list": [ 00:20:52.061 { 00:20:52.061 "name": "pt1", 00:20:52.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:52.061 "is_configured": true, 00:20:52.061 "data_offset": 256, 00:20:52.061 "data_size": 7936 00:20:52.061 }, 00:20:52.061 { 00:20:52.061 "name": "pt2", 00:20:52.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.061 "is_configured": true, 00:20:52.061 "data_offset": 256, 00:20:52.061 "data_size": 7936 00:20:52.061 } 00:20:52.061 ] 00:20:52.061 }' 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.061 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.631 [2024-11-26 20:33:45.942789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.631 20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:52.631 "name": "raid_bdev1", 00:20:52.631 "aliases": [ 00:20:52.631 "5b993779-bba0-4884-90c7-d843d5e14582" 00:20:52.631 ], 00:20:52.631 "product_name": "Raid Volume", 00:20:52.631 "block_size": 4096, 00:20:52.631 "num_blocks": 7936, 00:20:52.631 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:52.631 "assigned_rate_limits": { 00:20:52.631 "rw_ios_per_sec": 0, 00:20:52.631 "rw_mbytes_per_sec": 0, 00:20:52.631 "r_mbytes_per_sec": 0, 00:20:52.631 "w_mbytes_per_sec": 0 00:20:52.631 }, 00:20:52.631 "claimed": false, 00:20:52.631 "zoned": false, 00:20:52.631 "supported_io_types": { 00:20:52.631 "read": true, 00:20:52.631 "write": true, 00:20:52.631 "unmap": false, 
00:20:52.631 "flush": false, 00:20:52.631 "reset": true, 00:20:52.631 "nvme_admin": false, 00:20:52.631 "nvme_io": false, 00:20:52.631 "nvme_io_md": false, 00:20:52.631 "write_zeroes": true, 00:20:52.631 "zcopy": false, 00:20:52.631 "get_zone_info": false, 00:20:52.631 "zone_management": false, 00:20:52.631 "zone_append": false, 00:20:52.631 "compare": false, 00:20:52.631 "compare_and_write": false, 00:20:52.631 "abort": false, 00:20:52.631 "seek_hole": false, 00:20:52.631 "seek_data": false, 00:20:52.631 "copy": false, 00:20:52.631 "nvme_iov_md": false 00:20:52.632 }, 00:20:52.632 "memory_domains": [ 00:20:52.632 { 00:20:52.632 "dma_device_id": "system", 00:20:52.632 "dma_device_type": 1 00:20:52.632 }, 00:20:52.632 { 00:20:52.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.632 "dma_device_type": 2 00:20:52.632 }, 00:20:52.632 { 00:20:52.632 "dma_device_id": "system", 00:20:52.632 "dma_device_type": 1 00:20:52.632 }, 00:20:52.632 { 00:20:52.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.632 "dma_device_type": 2 00:20:52.632 } 00:20:52.632 ], 00:20:52.632 "driver_specific": { 00:20:52.632 "raid": { 00:20:52.632 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:52.632 "strip_size_kb": 0, 00:20:52.632 "state": "online", 00:20:52.632 "raid_level": "raid1", 00:20:52.632 "superblock": true, 00:20:52.632 "num_base_bdevs": 2, 00:20:52.632 "num_base_bdevs_discovered": 2, 00:20:52.632 "num_base_bdevs_operational": 2, 00:20:52.632 "base_bdevs_list": [ 00:20:52.632 { 00:20:52.632 "name": "pt1", 00:20:52.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:52.632 "is_configured": true, 00:20:52.632 "data_offset": 256, 00:20:52.632 "data_size": 7936 00:20:52.632 }, 00:20:52.632 { 00:20:52.632 "name": "pt2", 00:20:52.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.632 "is_configured": true, 00:20:52.632 "data_offset": 256, 00:20:52.632 "data_size": 7936 00:20:52.632 } 00:20:52.632 ] 00:20:52.632 } 00:20:52.632 } 00:20:52.632 }' 00:20:52.632 
20:33:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:52.632 pt2' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.632 
20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:52.632 [2024-11-26 20:33:46.170460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.632 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5b993779-bba0-4884-90c7-d843d5e14582 '!=' 5b993779-bba0-4884-90c7-d843d5e14582 ']' 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.890 [2024-11-26 20:33:46.214141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:52.890 
20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.890 "name": "raid_bdev1", 00:20:52.890 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 
00:20:52.890 "strip_size_kb": 0, 00:20:52.890 "state": "online", 00:20:52.890 "raid_level": "raid1", 00:20:52.890 "superblock": true, 00:20:52.890 "num_base_bdevs": 2, 00:20:52.890 "num_base_bdevs_discovered": 1, 00:20:52.890 "num_base_bdevs_operational": 1, 00:20:52.890 "base_bdevs_list": [ 00:20:52.890 { 00:20:52.890 "name": null, 00:20:52.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.890 "is_configured": false, 00:20:52.890 "data_offset": 0, 00:20:52.890 "data_size": 7936 00:20:52.890 }, 00:20:52.890 { 00:20:52.890 "name": "pt2", 00:20:52.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.890 "is_configured": true, 00:20:52.890 "data_offset": 256, 00:20:52.890 "data_size": 7936 00:20:52.890 } 00:20:52.890 ] 00:20:52.890 }' 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.890 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.148 [2024-11-26 20:33:46.597469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.148 [2024-11-26 20:33:46.597552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.148 [2024-11-26 20:33:46.597686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.148 [2024-11-26 20:33:46.597769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.148 [2024-11-26 20:33:46.597890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:53.148 20:33:46 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:20:53.148 20:33:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.148 [2024-11-26 20:33:46.645408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:53.148 [2024-11-26 20:33:46.645531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.148 [2024-11-26 20:33:46.645583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:53.148 [2024-11-26 20:33:46.645622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.148 [2024-11-26 20:33:46.648212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.148 [2024-11-26 20:33:46.648336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:53.148 [2024-11-26 20:33:46.648480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:53.148 [2024-11-26 20:33:46.648593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.148 [2024-11-26 20:33:46.648774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:53.148 [2024-11-26 20:33:46.648840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:53.148 [2024-11-26 20:33:46.649177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:53.148 [2024-11-26 20:33:46.649443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:53.148 [2024-11-26 20:33:46.649503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:20:53.148 [2024-11-26 20:33:46.649775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.148 pt2 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.148 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.149 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.149 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.149 20:33:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.149 "name": "raid_bdev1", 00:20:53.149 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:53.149 "strip_size_kb": 0, 00:20:53.149 "state": "online", 00:20:53.149 "raid_level": "raid1", 00:20:53.149 "superblock": true, 00:20:53.149 "num_base_bdevs": 2, 00:20:53.149 "num_base_bdevs_discovered": 1, 00:20:53.149 "num_base_bdevs_operational": 1, 00:20:53.149 "base_bdevs_list": [ 00:20:53.149 { 00:20:53.149 "name": null, 00:20:53.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.149 "is_configured": false, 00:20:53.149 "data_offset": 256, 00:20:53.149 "data_size": 7936 00:20:53.149 }, 00:20:53.149 { 00:20:53.149 "name": "pt2", 00:20:53.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.149 "is_configured": true, 00:20:53.149 "data_offset": 256, 00:20:53.149 "data_size": 7936 00:20:53.149 } 00:20:53.149 ] 00:20:53.149 }' 00:20:53.149 20:33:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.149 20:33:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.716 [2024-11-26 20:33:47.053094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.716 [2024-11-26 20:33:47.053134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.716 [2024-11-26 20:33:47.053232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.716 [2024-11-26 20:33:47.053324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.716 [2024-11-26 20:33:47.053338] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.716 [2024-11-26 20:33:47.109008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:53.716 [2024-11-26 20:33:47.109078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.716 [2024-11-26 20:33:47.109108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:53.716 [2024-11-26 20:33:47.109121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.716 [2024-11-26 20:33:47.111786] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.716 [2024-11-26 20:33:47.111828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:53.716 [2024-11-26 20:33:47.111925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:53.716 [2024-11-26 20:33:47.111984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:53.716 [2024-11-26 20:33:47.112167] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:53.716 [2024-11-26 20:33:47.112183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.716 [2024-11-26 20:33:47.112201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:53.716 [2024-11-26 20:33:47.112287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.716 [2024-11-26 20:33:47.112375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:53.716 [2024-11-26 20:33:47.112384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:53.716 [2024-11-26 20:33:47.112676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:53.716 pt1 00:20:53.716 [2024-11-26 20:33:47.112918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:53.716 [2024-11-26 20:33:47.112940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:53.716 [2024-11-26 20:33:47.113197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.716 "name": "raid_bdev1", 00:20:53.716 "uuid": "5b993779-bba0-4884-90c7-d843d5e14582", 00:20:53.716 "strip_size_kb": 0, 00:20:53.716 "state": "online", 00:20:53.716 "raid_level": "raid1", 
00:20:53.716 "superblock": true, 00:20:53.716 "num_base_bdevs": 2, 00:20:53.716 "num_base_bdevs_discovered": 1, 00:20:53.716 "num_base_bdevs_operational": 1, 00:20:53.716 "base_bdevs_list": [ 00:20:53.716 { 00:20:53.716 "name": null, 00:20:53.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.716 "is_configured": false, 00:20:53.716 "data_offset": 256, 00:20:53.716 "data_size": 7936 00:20:53.716 }, 00:20:53.716 { 00:20:53.716 "name": "pt2", 00:20:53.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.716 "is_configured": true, 00:20:53.716 "data_offset": 256, 00:20:53.716 "data_size": 7936 00:20:53.716 } 00:20:53.716 ] 00:20:53.716 }' 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.716 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:54.286 
[2024-11-26 20:33:47.572797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5b993779-bba0-4884-90c7-d843d5e14582 '!=' 5b993779-bba0-4884-90c7-d843d5e14582 ']' 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86661 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86661 ']' 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86661 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86661 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.286 killing process with pid 86661 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86661' 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86661 00:20:54.286 [2024-11-26 20:33:47.654599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:54.286 [2024-11-26 20:33:47.654700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.286 [2024-11-26 20:33:47.654751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:54.286 [2024-11-26 20:33:47.654767] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:54.286 20:33:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86661 00:20:54.545 [2024-11-26 20:33:47.878827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:55.552 20:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:55.552 00:20:55.552 real 0m6.100s 00:20:55.552 user 0m9.173s 00:20:55.552 sys 0m1.040s 00:20:55.552 20:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.552 ************************************ 00:20:55.552 END TEST raid_superblock_test_4k 00:20:55.552 ************************************ 00:20:55.552 20:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.552 20:33:49 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:55.552 20:33:49 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:55.552 20:33:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:55.552 20:33:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.552 20:33:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:55.552 ************************************ 00:20:55.552 START TEST raid_rebuild_test_sb_4k 00:20:55.552 ************************************ 00:20:55.552 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:55.552 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:55.552 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:55.552 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:55.552 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:55.552 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:55.810 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:55.811 20:33:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86985 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86985 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86985 ']' 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.811 20:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.811 [2024-11-26 20:33:49.200022] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:20:55.811 [2024-11-26 20:33:49.200210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:55.811 Zero copy mechanism will not be used. 
00:20:55.811 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86985 ] 00:20:56.069 [2024-11-26 20:33:49.372682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.069 [2024-11-26 20:33:49.488521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.328 [2024-11-26 20:33:49.693509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.328 [2024-11-26 20:33:49.693626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.588 BaseBdev1_malloc 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.588 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.588 [2024-11-26 20:33:50.140278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:56.588 [2024-11-26 20:33:50.140393] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.588 [2024-11-26 20:33:50.140421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:56.588 [2024-11-26 20:33:50.140433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.846 [2024-11-26 20:33:50.142617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.846 [2024-11-26 20:33:50.142659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:56.846 BaseBdev1 00:20:56.846 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.846 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 BaseBdev2_malloc 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 [2024-11-26 20:33:50.192651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:56.847 [2024-11-26 20:33:50.192723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.847 [2024-11-26 20:33:50.192746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:20:56.847 [2024-11-26 20:33:50.192757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.847 [2024-11-26 20:33:50.194952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.847 [2024-11-26 20:33:50.194990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:56.847 BaseBdev2 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 spare_malloc 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 spare_delay 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 [2024-11-26 20:33:50.267059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:56.847 
[2024-11-26 20:33:50.267114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.847 [2024-11-26 20:33:50.267133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:56.847 [2024-11-26 20:33:50.267143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.847 [2024-11-26 20:33:50.269261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.847 [2024-11-26 20:33:50.269294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:56.847 spare 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 [2024-11-26 20:33:50.279102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:56.847 [2024-11-26 20:33:50.280862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.847 [2024-11-26 20:33:50.281083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:56.847 [2024-11-26 20:33:50.281100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:56.847 [2024-11-26 20:33:50.281340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:56.847 [2024-11-26 20:33:50.281501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:56.847 [2024-11-26 20:33:50.281533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:20:56.847 [2024-11-26 20:33:50.281679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.847 "name": "raid_bdev1", 00:20:56.847 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:20:56.847 "strip_size_kb": 0, 00:20:56.847 "state": "online", 00:20:56.847 "raid_level": "raid1", 00:20:56.847 "superblock": true, 00:20:56.847 "num_base_bdevs": 2, 00:20:56.847 "num_base_bdevs_discovered": 2, 00:20:56.847 "num_base_bdevs_operational": 2, 00:20:56.847 "base_bdevs_list": [ 00:20:56.847 { 00:20:56.847 "name": "BaseBdev1", 00:20:56.847 "uuid": "4b338348-8dad-5595-aa74-84b52e3903d4", 00:20:56.847 "is_configured": true, 00:20:56.847 "data_offset": 256, 00:20:56.847 "data_size": 7936 00:20:56.847 }, 00:20:56.847 { 00:20:56.847 "name": "BaseBdev2", 00:20:56.847 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:20:56.847 "is_configured": true, 00:20:56.847 "data_offset": 256, 00:20:56.847 "data_size": 7936 00:20:56.847 } 00:20:56.847 ] 00:20:56.847 }' 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.847 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.414 [2024-11-26 20:33:50.738623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.414 20:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:57.671 [2024-11-26 20:33:50.997964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:57.671 /dev/nbd0 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.671 1+0 records in 00:20:57.671 1+0 records out 00:20:57.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351357 s, 11.7 MB/s 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:57.671 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:58.606 7936+0 records in 00:20:58.606 7936+0 records out 00:20:58.606 32505856 bytes (33 MB, 31 MiB) copied, 0.738141 s, 44.0 MB/s 00:20:58.606 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:58.606 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:58.606 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:58.606 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.606 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:58.606 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.606 20:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.606 [2024-11-26 20:33:52.048512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.606 [2024-11-26 20:33:52.056637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:20:58.606 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.607 "name": "raid_bdev1", 00:20:58.607 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:20:58.607 "strip_size_kb": 0, 00:20:58.607 "state": "online", 00:20:58.607 "raid_level": "raid1", 00:20:58.607 "superblock": true, 00:20:58.607 "num_base_bdevs": 2, 00:20:58.607 "num_base_bdevs_discovered": 1, 00:20:58.607 "num_base_bdevs_operational": 1, 00:20:58.607 "base_bdevs_list": [ 00:20:58.607 { 00:20:58.607 "name": null, 00:20:58.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.607 "is_configured": false, 00:20:58.607 "data_offset": 0, 00:20:58.607 "data_size": 7936 00:20:58.607 }, 00:20:58.607 { 00:20:58.607 "name": "BaseBdev2", 00:20:58.607 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:20:58.607 "is_configured": true, 00:20:58.607 "data_offset": 256, 00:20:58.607 "data_size": 7936 00:20:58.607 } 00:20:58.607 ] 00:20:58.607 }' 00:20:58.607 20:33:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.607 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.173 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:59.173 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.173 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.173 [2024-11-26 20:33:52.503939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:59.173 [2024-11-26 20:33:52.523595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:59.173 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.173 20:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:59.173 [2024-11-26 20:33:52.525679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.108 "name": "raid_bdev1", 00:21:00.108 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:00.108 "strip_size_kb": 0, 00:21:00.108 "state": "online", 00:21:00.108 "raid_level": "raid1", 00:21:00.108 "superblock": true, 00:21:00.108 "num_base_bdevs": 2, 00:21:00.108 "num_base_bdevs_discovered": 2, 00:21:00.108 "num_base_bdevs_operational": 2, 00:21:00.108 "process": { 00:21:00.108 "type": "rebuild", 00:21:00.108 "target": "spare", 00:21:00.108 "progress": { 00:21:00.108 "blocks": 2560, 00:21:00.108 "percent": 32 00:21:00.108 } 00:21:00.108 }, 00:21:00.108 "base_bdevs_list": [ 00:21:00.108 { 00:21:00.108 "name": "spare", 00:21:00.108 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:00.108 "is_configured": true, 00:21:00.108 "data_offset": 256, 00:21:00.108 "data_size": 7936 00:21:00.108 }, 00:21:00.108 { 00:21:00.108 "name": "BaseBdev2", 00:21:00.108 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:00.108 "is_configured": true, 00:21:00.108 "data_offset": 256, 00:21:00.108 "data_size": 7936 00:21:00.108 } 00:21:00.108 ] 00:21:00.108 }' 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.108 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.368 [2024-11-26 20:33:53.689367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.368 [2024-11-26 20:33:53.731825] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:00.368 [2024-11-26 20:33:53.731952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.368 [2024-11-26 20:33:53.731971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.368 [2024-11-26 20:33:53.731981] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.368 "name": "raid_bdev1", 00:21:00.368 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:00.368 "strip_size_kb": 0, 00:21:00.368 "state": "online", 00:21:00.368 "raid_level": "raid1", 00:21:00.368 "superblock": true, 00:21:00.368 "num_base_bdevs": 2, 00:21:00.368 "num_base_bdevs_discovered": 1, 00:21:00.368 "num_base_bdevs_operational": 1, 00:21:00.368 "base_bdevs_list": [ 00:21:00.368 { 00:21:00.368 "name": null, 00:21:00.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.368 "is_configured": false, 00:21:00.368 "data_offset": 0, 00:21:00.368 "data_size": 7936 00:21:00.368 }, 00:21:00.368 { 00:21:00.368 "name": "BaseBdev2", 00:21:00.368 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:00.368 "is_configured": true, 00:21:00.368 "data_offset": 256, 00:21:00.368 "data_size": 7936 00:21:00.368 } 00:21:00.368 ] 00:21:00.368 }' 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.368 20:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.938 "name": "raid_bdev1", 00:21:00.938 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:00.938 "strip_size_kb": 0, 00:21:00.938 "state": "online", 00:21:00.938 "raid_level": "raid1", 00:21:00.938 "superblock": true, 00:21:00.938 "num_base_bdevs": 2, 00:21:00.938 "num_base_bdevs_discovered": 1, 00:21:00.938 "num_base_bdevs_operational": 1, 00:21:00.938 "base_bdevs_list": [ 00:21:00.938 { 00:21:00.938 "name": null, 00:21:00.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.938 "is_configured": false, 00:21:00.938 "data_offset": 0, 00:21:00.938 "data_size": 7936 00:21:00.938 }, 00:21:00.938 { 00:21:00.938 "name": "BaseBdev2", 00:21:00.938 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:00.938 "is_configured": true, 00:21:00.938 "data_offset": 256, 00:21:00.938 "data_size": 7936 00:21:00.938 } 00:21:00.938 ] 00:21:00.938 }' 
00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.938 [2024-11-26 20:33:54.310326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:00.938 [2024-11-26 20:33:54.328889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.938 20:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:00.938 [2024-11-26 20:33:54.331106] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.873 20:33:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.873 "name": "raid_bdev1", 00:21:01.873 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:01.873 "strip_size_kb": 0, 00:21:01.873 "state": "online", 00:21:01.873 "raid_level": "raid1", 00:21:01.873 "superblock": true, 00:21:01.873 "num_base_bdevs": 2, 00:21:01.873 "num_base_bdevs_discovered": 2, 00:21:01.873 "num_base_bdevs_operational": 2, 00:21:01.873 "process": { 00:21:01.873 "type": "rebuild", 00:21:01.873 "target": "spare", 00:21:01.873 "progress": { 00:21:01.873 "blocks": 2560, 00:21:01.873 "percent": 32 00:21:01.873 } 00:21:01.873 }, 00:21:01.873 "base_bdevs_list": [ 00:21:01.873 { 00:21:01.873 "name": "spare", 00:21:01.873 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:01.873 "is_configured": true, 00:21:01.873 "data_offset": 256, 00:21:01.873 "data_size": 7936 00:21:01.873 }, 00:21:01.873 { 00:21:01.873 "name": "BaseBdev2", 00:21:01.873 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:01.873 "is_configured": true, 00:21:01.873 "data_offset": 256, 00:21:01.873 "data_size": 7936 00:21:01.873 } 00:21:01.873 ] 00:21:01.873 }' 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.873 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.873 20:33:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:02.134 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=708 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.134 20:33:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.134 "name": "raid_bdev1", 00:21:02.134 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:02.134 "strip_size_kb": 0, 00:21:02.134 "state": "online", 00:21:02.134 "raid_level": "raid1", 00:21:02.134 "superblock": true, 00:21:02.134 "num_base_bdevs": 2, 00:21:02.134 "num_base_bdevs_discovered": 2, 00:21:02.134 "num_base_bdevs_operational": 2, 00:21:02.134 "process": { 00:21:02.134 "type": "rebuild", 00:21:02.134 "target": "spare", 00:21:02.134 "progress": { 00:21:02.134 "blocks": 2816, 00:21:02.134 "percent": 35 00:21:02.134 } 00:21:02.134 }, 00:21:02.134 "base_bdevs_list": [ 00:21:02.134 { 00:21:02.134 "name": "spare", 00:21:02.134 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:02.134 "is_configured": true, 00:21:02.134 "data_offset": 256, 00:21:02.134 "data_size": 7936 00:21:02.134 }, 00:21:02.134 { 00:21:02.134 "name": "BaseBdev2", 00:21:02.134 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:02.134 "is_configured": true, 00:21:02.134 "data_offset": 256, 00:21:02.134 "data_size": 7936 00:21:02.134 } 00:21:02.134 ] 00:21:02.134 }' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.134 20:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:03.070 20:33:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.070 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.329 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.329 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.329 "name": "raid_bdev1", 00:21:03.329 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:03.329 "strip_size_kb": 0, 00:21:03.329 "state": "online", 00:21:03.329 "raid_level": "raid1", 00:21:03.329 "superblock": true, 00:21:03.329 "num_base_bdevs": 2, 00:21:03.329 "num_base_bdevs_discovered": 2, 00:21:03.329 "num_base_bdevs_operational": 2, 00:21:03.329 "process": { 00:21:03.329 "type": "rebuild", 00:21:03.329 "target": "spare", 00:21:03.329 "progress": { 00:21:03.329 "blocks": 5632, 00:21:03.329 "percent": 70 00:21:03.329 } 00:21:03.329 }, 00:21:03.329 "base_bdevs_list": [ 00:21:03.329 { 00:21:03.329 "name": "spare", 00:21:03.329 "uuid": 
"bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:03.329 "is_configured": true, 00:21:03.329 "data_offset": 256, 00:21:03.329 "data_size": 7936 00:21:03.329 }, 00:21:03.329 { 00:21:03.329 "name": "BaseBdev2", 00:21:03.329 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:03.329 "is_configured": true, 00:21:03.329 "data_offset": 256, 00:21:03.329 "data_size": 7936 00:21:03.329 } 00:21:03.329 ] 00:21:03.329 }' 00:21:03.329 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.329 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.329 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.329 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.329 20:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:03.897 [2024-11-26 20:33:57.446850] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:03.897 [2024-11-26 20:33:57.447058] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:03.897 [2024-11-26 20:33:57.447242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.464 "name": "raid_bdev1", 00:21:04.464 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:04.464 "strip_size_kb": 0, 00:21:04.464 "state": "online", 00:21:04.464 "raid_level": "raid1", 00:21:04.464 "superblock": true, 00:21:04.464 "num_base_bdevs": 2, 00:21:04.464 "num_base_bdevs_discovered": 2, 00:21:04.464 "num_base_bdevs_operational": 2, 00:21:04.464 "base_bdevs_list": [ 00:21:04.464 { 00:21:04.464 "name": "spare", 00:21:04.464 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:04.464 "is_configured": true, 00:21:04.464 "data_offset": 256, 00:21:04.464 "data_size": 7936 00:21:04.464 }, 00:21:04.464 { 00:21:04.464 "name": "BaseBdev2", 00:21:04.464 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:04.464 "is_configured": true, 00:21:04.464 "data_offset": 256, 00:21:04.464 "data_size": 7936 00:21:04.464 } 00:21:04.464 ] 00:21:04.464 }' 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.464 "name": "raid_bdev1", 00:21:04.464 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:04.464 "strip_size_kb": 0, 00:21:04.464 "state": "online", 00:21:04.464 "raid_level": "raid1", 00:21:04.464 "superblock": true, 00:21:04.464 "num_base_bdevs": 2, 00:21:04.464 "num_base_bdevs_discovered": 2, 00:21:04.464 "num_base_bdevs_operational": 2, 00:21:04.464 "base_bdevs_list": [ 00:21:04.464 { 00:21:04.464 "name": "spare", 00:21:04.464 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:04.464 "is_configured": true, 00:21:04.464 "data_offset": 256, 00:21:04.464 "data_size": 7936 00:21:04.464 }, 
00:21:04.464 { 00:21:04.464 "name": "BaseBdev2", 00:21:04.464 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:04.464 "is_configured": true, 00:21:04.464 "data_offset": 256, 00:21:04.464 "data_size": 7936 00:21:04.464 } 00:21:04.464 ] 00:21:04.464 }' 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.464 20:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.722 20:33:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.722 "name": "raid_bdev1", 00:21:04.722 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:04.722 "strip_size_kb": 0, 00:21:04.722 "state": "online", 00:21:04.722 "raid_level": "raid1", 00:21:04.722 "superblock": true, 00:21:04.722 "num_base_bdevs": 2, 00:21:04.722 "num_base_bdevs_discovered": 2, 00:21:04.722 "num_base_bdevs_operational": 2, 00:21:04.722 "base_bdevs_list": [ 00:21:04.722 { 00:21:04.722 "name": "spare", 00:21:04.722 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:04.722 "is_configured": true, 00:21:04.722 "data_offset": 256, 00:21:04.722 "data_size": 7936 00:21:04.722 }, 00:21:04.722 { 00:21:04.722 "name": "BaseBdev2", 00:21:04.722 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:04.722 "is_configured": true, 00:21:04.722 "data_offset": 256, 00:21:04.722 "data_size": 7936 00:21:04.722 } 00:21:04.722 ] 00:21:04.722 }' 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.722 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.981 [2024-11-26 20:33:58.408394] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.981 [2024-11-26 20:33:58.408428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.981 [2024-11-26 20:33:58.408519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.981 [2024-11-26 20:33:58.408588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.981 [2024-11-26 20:33:58.408601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:04.981 
20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:04.981 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:04.982 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:04.982 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:04.982 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:05.241 /dev/nbd0 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.241 1+0 records in 00:21:05.241 1+0 records out 00:21:05.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328888 s, 12.5 MB/s 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:05.241 20:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:05.501 /dev/nbd1 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.501 1+0 records in 00:21:05.501 1+0 records out 00:21:05.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456494 s, 9.0 MB/s 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:05.501 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:05.759 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:05.759 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:05.759 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:05.759 20:33:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.759 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:05.759 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.759 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.017 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.277 
20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.277 [2024-11-26 20:33:59.754692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.277 [2024-11-26 20:33:59.754761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.277 [2024-11-26 20:33:59.754788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:06.277 [2024-11-26 20:33:59.754799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.277 [2024-11-26 20:33:59.757382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.277 [2024-11-26 20:33:59.757426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:21:06.277 [2024-11-26 20:33:59.757551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:06.277 [2024-11-26 20:33:59.757615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.277 [2024-11-26 20:33:59.757787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.277 spare 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.277 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.536 [2024-11-26 20:33:59.857717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:06.536 [2024-11-26 20:33:59.857788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:06.536 [2024-11-26 20:33:59.858168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:06.536 [2024-11-26 20:33:59.858415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:06.536 [2024-11-26 20:33:59.858432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:06.536 [2024-11-26 20:33:59.858684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.536 
20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.536 "name": "raid_bdev1", 00:21:06.536 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:06.536 "strip_size_kb": 0, 00:21:06.536 "state": "online", 00:21:06.536 "raid_level": "raid1", 00:21:06.536 "superblock": true, 00:21:06.536 "num_base_bdevs": 2, 00:21:06.536 "num_base_bdevs_discovered": 2, 00:21:06.536 "num_base_bdevs_operational": 2, 00:21:06.536 "base_bdevs_list": [ 00:21:06.536 { 00:21:06.536 "name": "spare", 00:21:06.536 "uuid": 
"bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:06.536 "is_configured": true, 00:21:06.536 "data_offset": 256, 00:21:06.536 "data_size": 7936 00:21:06.536 }, 00:21:06.536 { 00:21:06.536 "name": "BaseBdev2", 00:21:06.536 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:06.536 "is_configured": true, 00:21:06.536 "data_offset": 256, 00:21:06.536 "data_size": 7936 00:21:06.536 } 00:21:06.536 ] 00:21:06.536 }' 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.536 20:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.795 "name": "raid_bdev1", 00:21:06.795 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:06.795 "strip_size_kb": 0, 00:21:06.795 
"state": "online", 00:21:06.795 "raid_level": "raid1", 00:21:06.795 "superblock": true, 00:21:06.795 "num_base_bdevs": 2, 00:21:06.795 "num_base_bdevs_discovered": 2, 00:21:06.795 "num_base_bdevs_operational": 2, 00:21:06.795 "base_bdevs_list": [ 00:21:06.795 { 00:21:06.795 "name": "spare", 00:21:06.795 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:06.795 "is_configured": true, 00:21:06.795 "data_offset": 256, 00:21:06.795 "data_size": 7936 00:21:06.795 }, 00:21:06.795 { 00:21:06.795 "name": "BaseBdev2", 00:21:06.795 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:06.795 "is_configured": true, 00:21:06.795 "data_offset": 256, 00:21:06.795 "data_size": 7936 00:21:06.795 } 00:21:06.795 ] 00:21:06.795 }' 00:21:06.795 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:07.053 20:34:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:07.053 [2024-11-26 20:34:00.461650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.053 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.054 
20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.054 "name": "raid_bdev1", 00:21:07.054 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:07.054 "strip_size_kb": 0, 00:21:07.054 "state": "online", 00:21:07.054 "raid_level": "raid1", 00:21:07.054 "superblock": true, 00:21:07.054 "num_base_bdevs": 2, 00:21:07.054 "num_base_bdevs_discovered": 1, 00:21:07.054 "num_base_bdevs_operational": 1, 00:21:07.054 "base_bdevs_list": [ 00:21:07.054 { 00:21:07.054 "name": null, 00:21:07.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.054 "is_configured": false, 00:21:07.054 "data_offset": 0, 00:21:07.054 "data_size": 7936 00:21:07.054 }, 00:21:07.054 { 00:21:07.054 "name": "BaseBdev2", 00:21:07.054 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:07.054 "is_configured": true, 00:21:07.054 "data_offset": 256, 00:21:07.054 "data_size": 7936 00:21:07.054 } 00:21:07.054 ] 00:21:07.054 }' 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.054 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:07.617 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:07.617 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.617 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:07.617 [2024-11-26 20:34:00.889016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:07.617 [2024-11-26 20:34:00.889346] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:07.617 [2024-11-26 20:34:00.889422] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:21:07.617 [2024-11-26 20:34:00.889508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:07.617 [2024-11-26 20:34:00.905306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:07.617 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.617 20:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:07.617 [2024-11-26 20:34:00.907236] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.553 "name": "raid_bdev1", 00:21:08.553 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:08.553 
"strip_size_kb": 0, 00:21:08.553 "state": "online", 00:21:08.553 "raid_level": "raid1", 00:21:08.553 "superblock": true, 00:21:08.553 "num_base_bdevs": 2, 00:21:08.553 "num_base_bdevs_discovered": 2, 00:21:08.553 "num_base_bdevs_operational": 2, 00:21:08.553 "process": { 00:21:08.553 "type": "rebuild", 00:21:08.553 "target": "spare", 00:21:08.553 "progress": { 00:21:08.553 "blocks": 2560, 00:21:08.553 "percent": 32 00:21:08.553 } 00:21:08.553 }, 00:21:08.553 "base_bdevs_list": [ 00:21:08.553 { 00:21:08.553 "name": "spare", 00:21:08.553 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:08.553 "is_configured": true, 00:21:08.553 "data_offset": 256, 00:21:08.553 "data_size": 7936 00:21:08.553 }, 00:21:08.553 { 00:21:08.553 "name": "BaseBdev2", 00:21:08.553 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:08.553 "is_configured": true, 00:21:08.553 "data_offset": 256, 00:21:08.553 "data_size": 7936 00:21:08.553 } 00:21:08.553 ] 00:21:08.553 }' 00:21:08.553 20:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.553 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.553 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.553 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.553 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:08.553 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.553 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.553 [2024-11-26 20:34:02.063500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.811 [2024-11-26 20:34:02.113142] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:21:08.811 [2024-11-26 20:34:02.113213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.811 [2024-11-26 20:34:02.113229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.811 [2024-11-26 20:34:02.113252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.811 "name": "raid_bdev1", 00:21:08.811 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:08.811 "strip_size_kb": 0, 00:21:08.811 "state": "online", 00:21:08.811 "raid_level": "raid1", 00:21:08.811 "superblock": true, 00:21:08.811 "num_base_bdevs": 2, 00:21:08.811 "num_base_bdevs_discovered": 1, 00:21:08.811 "num_base_bdevs_operational": 1, 00:21:08.811 "base_bdevs_list": [ 00:21:08.811 { 00:21:08.811 "name": null, 00:21:08.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.811 "is_configured": false, 00:21:08.811 "data_offset": 0, 00:21:08.811 "data_size": 7936 00:21:08.811 }, 00:21:08.811 { 00:21:08.811 "name": "BaseBdev2", 00:21:08.811 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:08.811 "is_configured": true, 00:21:08.811 "data_offset": 256, 00:21:08.811 "data_size": 7936 00:21:08.811 } 00:21:08.811 ] 00:21:08.811 }' 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.811 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.375 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:09.375 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.375 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.375 [2024-11-26 20:34:02.673868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:09.375 [2024-11-26 20:34:02.673952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.375 [2024-11-26 
20:34:02.673977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:09.375 [2024-11-26 20:34:02.673990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.375 [2024-11-26 20:34:02.674553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.375 [2024-11-26 20:34:02.674596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:09.375 [2024-11-26 20:34:02.674702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:09.375 [2024-11-26 20:34:02.674721] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:09.375 [2024-11-26 20:34:02.674733] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:09.375 [2024-11-26 20:34:02.674761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:09.375 [2024-11-26 20:34:02.692287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:09.375 spare 00:21:09.375 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.375 20:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:09.375 [2024-11-26 20:34:02.694480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.307 "name": "raid_bdev1", 00:21:10.307 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:10.307 "strip_size_kb": 0, 00:21:10.307 "state": "online", 00:21:10.307 "raid_level": "raid1", 00:21:10.307 "superblock": true, 00:21:10.307 "num_base_bdevs": 2, 00:21:10.307 "num_base_bdevs_discovered": 2, 00:21:10.307 "num_base_bdevs_operational": 2, 00:21:10.307 "process": { 00:21:10.307 "type": "rebuild", 00:21:10.307 "target": "spare", 00:21:10.307 "progress": { 00:21:10.307 "blocks": 2560, 00:21:10.307 "percent": 32 00:21:10.307 } 00:21:10.307 }, 00:21:10.307 "base_bdevs_list": [ 00:21:10.307 { 00:21:10.307 "name": "spare", 00:21:10.307 "uuid": "bc6761c1-9224-57f7-bd33-0506cccccd8c", 00:21:10.307 "is_configured": true, 00:21:10.307 "data_offset": 256, 00:21:10.307 "data_size": 7936 00:21:10.307 }, 00:21:10.307 { 00:21:10.307 "name": "BaseBdev2", 00:21:10.307 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:10.307 "is_configured": true, 00:21:10.307 "data_offset": 256, 00:21:10.307 "data_size": 7936 00:21:10.307 } 00:21:10.307 ] 00:21:10.307 }' 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.307 20:34:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.307 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.307 [2024-11-26 20:34:03.825468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.565 [2024-11-26 20:34:03.901011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:10.565 [2024-11-26 20:34:03.901099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.565 [2024-11-26 20:34:03.901117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.565 [2024-11-26 20:34:03.901126] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.565 20:34:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.565 "name": "raid_bdev1", 00:21:10.565 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:10.565 "strip_size_kb": 0, 00:21:10.565 "state": "online", 00:21:10.565 "raid_level": "raid1", 00:21:10.565 "superblock": true, 00:21:10.565 "num_base_bdevs": 2, 00:21:10.565 "num_base_bdevs_discovered": 1, 00:21:10.565 "num_base_bdevs_operational": 1, 00:21:10.565 "base_bdevs_list": [ 00:21:10.565 { 00:21:10.565 "name": null, 00:21:10.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.565 "is_configured": false, 00:21:10.565 "data_offset": 0, 00:21:10.565 "data_size": 7936 00:21:10.565 }, 00:21:10.565 { 00:21:10.565 "name": "BaseBdev2", 00:21:10.565 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:10.565 "is_configured": true, 00:21:10.565 "data_offset": 256, 00:21:10.565 
"data_size": 7936 00:21:10.565 } 00:21:10.565 ] 00:21:10.565 }' 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.565 20:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.130 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.130 "name": "raid_bdev1", 00:21:11.130 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:11.130 "strip_size_kb": 0, 00:21:11.130 "state": "online", 00:21:11.130 "raid_level": "raid1", 00:21:11.130 "superblock": true, 00:21:11.130 "num_base_bdevs": 2, 00:21:11.130 "num_base_bdevs_discovered": 1, 00:21:11.130 "num_base_bdevs_operational": 1, 00:21:11.130 "base_bdevs_list": [ 00:21:11.130 { 00:21:11.130 "name": null, 00:21:11.130 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:11.130 "is_configured": false, 00:21:11.130 "data_offset": 0, 00:21:11.130 "data_size": 7936 00:21:11.130 }, 00:21:11.130 { 00:21:11.130 "name": "BaseBdev2", 00:21:11.130 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:11.130 "is_configured": true, 00:21:11.131 "data_offset": 256, 00:21:11.131 "data_size": 7936 00:21:11.131 } 00:21:11.131 ] 00:21:11.131 }' 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.131 [2024-11-26 20:34:04.553264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:11.131 [2024-11-26 20:34:04.553323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.131 [2024-11-26 20:34:04.553352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:21:11.131 [2024-11-26 20:34:04.553375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.131 [2024-11-26 20:34:04.553877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.131 [2024-11-26 20:34:04.553895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:11.131 [2024-11-26 20:34:04.553983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:11.131 [2024-11-26 20:34:04.553998] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:11.131 [2024-11-26 20:34:04.554009] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:11.131 [2024-11-26 20:34:04.554020] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:11.131 BaseBdev1 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.131 20:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.063 "name": "raid_bdev1", 00:21:12.063 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:12.063 "strip_size_kb": 0, 00:21:12.063 "state": "online", 00:21:12.063 "raid_level": "raid1", 00:21:12.063 "superblock": true, 00:21:12.063 "num_base_bdevs": 2, 00:21:12.063 "num_base_bdevs_discovered": 1, 00:21:12.063 "num_base_bdevs_operational": 1, 00:21:12.063 "base_bdevs_list": [ 00:21:12.063 { 00:21:12.063 "name": null, 00:21:12.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.063 "is_configured": false, 00:21:12.063 "data_offset": 0, 00:21:12.063 "data_size": 7936 00:21:12.063 }, 00:21:12.063 { 00:21:12.063 "name": "BaseBdev2", 00:21:12.063 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:12.063 "is_configured": true, 00:21:12.063 "data_offset": 256, 00:21:12.063 "data_size": 7936 00:21:12.063 } 00:21:12.063 ] 00:21:12.063 }' 00:21:12.063 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.063 20:34:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.629 "name": "raid_bdev1", 00:21:12.629 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:12.629 "strip_size_kb": 0, 00:21:12.629 "state": "online", 00:21:12.629 "raid_level": "raid1", 00:21:12.629 "superblock": true, 00:21:12.629 "num_base_bdevs": 2, 00:21:12.629 "num_base_bdevs_discovered": 1, 00:21:12.629 "num_base_bdevs_operational": 1, 00:21:12.629 "base_bdevs_list": [ 00:21:12.629 { 00:21:12.629 "name": null, 00:21:12.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.629 "is_configured": false, 00:21:12.629 "data_offset": 0, 00:21:12.629 "data_size": 7936 00:21:12.629 }, 00:21:12.629 { 00:21:12.629 "name": "BaseBdev2", 00:21:12.629 "uuid": 
"b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:12.629 "is_configured": true, 00:21:12.629 "data_offset": 256, 00:21:12.629 "data_size": 7936 00:21:12.629 } 00:21:12.629 ] 00:21:12.629 }' 00:21:12.629 20:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.629 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:12.629 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.629 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:12.629 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:12.629 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:12.629 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:12.629 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.630 [2024-11-26 20:34:06.058831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:21:12.630 [2024-11-26 20:34:06.059046] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:12.630 [2024-11-26 20:34:06.059077] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:12.630 request: 00:21:12.630 { 00:21:12.630 "base_bdev": "BaseBdev1", 00:21:12.630 "raid_bdev": "raid_bdev1", 00:21:12.630 "method": "bdev_raid_add_base_bdev", 00:21:12.630 "req_id": 1 00:21:12.630 } 00:21:12.630 Got JSON-RPC error response 00:21:12.630 response: 00:21:12.630 { 00:21:12.630 "code": -22, 00:21:12.630 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:12.630 } 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.630 20:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.565 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.824 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.824 "name": "raid_bdev1", 00:21:13.824 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:13.824 "strip_size_kb": 0, 00:21:13.824 "state": "online", 00:21:13.824 "raid_level": "raid1", 00:21:13.824 "superblock": true, 00:21:13.824 "num_base_bdevs": 2, 00:21:13.824 "num_base_bdevs_discovered": 1, 00:21:13.824 "num_base_bdevs_operational": 1, 00:21:13.824 "base_bdevs_list": [ 00:21:13.824 { 00:21:13.824 "name": null, 00:21:13.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.824 "is_configured": false, 00:21:13.824 "data_offset": 0, 00:21:13.824 "data_size": 7936 00:21:13.824 }, 00:21:13.824 { 00:21:13.824 "name": "BaseBdev2", 00:21:13.824 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:13.824 "is_configured": true, 00:21:13.824 "data_offset": 256, 00:21:13.824 "data_size": 7936 00:21:13.824 } 
00:21:13.824 ] 00:21:13.824 }' 00:21:13.824 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.824 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.082 "name": "raid_bdev1", 00:21:14.082 "uuid": "09abd897-3312-4964-934c-b8fbd9c201bd", 00:21:14.082 "strip_size_kb": 0, 00:21:14.082 "state": "online", 00:21:14.082 "raid_level": "raid1", 00:21:14.082 "superblock": true, 00:21:14.082 "num_base_bdevs": 2, 00:21:14.082 "num_base_bdevs_discovered": 1, 00:21:14.082 "num_base_bdevs_operational": 1, 00:21:14.082 "base_bdevs_list": [ 00:21:14.082 { 00:21:14.082 "name": null, 00:21:14.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.082 "is_configured": false, 
00:21:14.082 "data_offset": 0, 00:21:14.082 "data_size": 7936 00:21:14.082 }, 00:21:14.082 { 00:21:14.082 "name": "BaseBdev2", 00:21:14.082 "uuid": "b2f192de-7cb4-54ec-bee4-3917e64e4777", 00:21:14.082 "is_configured": true, 00:21:14.082 "data_offset": 256, 00:21:14.082 "data_size": 7936 00:21:14.082 } 00:21:14.082 ] 00:21:14.082 }' 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86985 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86985 ']' 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86985 00:21:14.082 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:14.083 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.083 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86985 00:21:14.083 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.083 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.083 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86985' 00:21:14.083 killing process with pid 86985 00:21:14.083 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86985 00:21:14.083 Received 
shutdown signal, test time was about 60.000000 seconds 00:21:14.083 00:21:14.083 Latency(us) 00:21:14.083 [2024-11-26T20:34:07.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.083 [2024-11-26T20:34:07.638Z] =================================================================================================================== 00:21:14.083 [2024-11-26T20:34:07.638Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.083 [2024-11-26 20:34:07.579974] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.083 20:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86985 00:21:14.083 [2024-11-26 20:34:07.580146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.083 [2024-11-26 20:34:07.580210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.083 [2024-11-26 20:34:07.580228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:14.650 [2024-11-26 20:34:07.925647] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:16.023 20:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:21:16.023 00:21:16.023 real 0m20.097s 00:21:16.023 user 0m26.080s 00:21:16.023 sys 0m2.492s 00:21:16.023 20:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.023 20:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.023 ************************************ 00:21:16.023 END TEST raid_rebuild_test_sb_4k 00:21:16.023 ************************************ 00:21:16.023 20:34:09 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:21:16.023 20:34:09 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:21:16.023 
20:34:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:16.023 20:34:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.023 20:34:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:16.023 ************************************ 00:21:16.023 START TEST raid_state_function_test_sb_md_separate 00:21:16.023 ************************************ 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87681 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87681' 00:21:16.023 Process raid pid: 87681 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87681 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87681 ']' 00:21:16.023 20:34:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.023 20:34:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.023 [2024-11-26 20:34:09.339805] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:21:16.023 [2024-11-26 20:34:09.340462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.023 [2024-11-26 20:34:09.520906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.282 [2024-11-26 20:34:09.655168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.572 [2024-11-26 20:34:09.893667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.572 [2024-11-26 20:34:09.893714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.831 [2024-11-26 20:34:10.251076] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:16.831 [2024-11-26 20:34:10.251133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:16.831 [2024-11-26 20:34:10.251147] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:16.831 [2024-11-26 20:34:10.251158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.831 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.832 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.832 "name": "Existed_Raid", 00:21:16.832 "uuid": "90b2e1a4-4640-4f4c-8126-333564a5e17f", 00:21:16.832 "strip_size_kb": 0, 00:21:16.832 "state": "configuring", 00:21:16.832 "raid_level": "raid1", 00:21:16.832 "superblock": true, 00:21:16.832 "num_base_bdevs": 2, 00:21:16.832 "num_base_bdevs_discovered": 0, 00:21:16.832 "num_base_bdevs_operational": 2, 00:21:16.832 "base_bdevs_list": [ 00:21:16.832 { 00:21:16.832 "name": "BaseBdev1", 00:21:16.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.832 "is_configured": false, 00:21:16.832 "data_offset": 0, 00:21:16.832 "data_size": 0 00:21:16.832 }, 00:21:16.832 { 00:21:16.832 "name": "BaseBdev2", 00:21:16.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.832 "is_configured": false, 00:21:16.832 "data_offset": 0, 00:21:16.832 "data_size": 0 00:21:16.832 } 00:21:16.832 ] 00:21:16.832 }' 00:21:16.832 20:34:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.832 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.415 [2024-11-26 20:34:10.674338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.415 [2024-11-26 20:34:10.674383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.415 [2024-11-26 20:34:10.682320] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:17.415 [2024-11-26 20:34:10.682360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:17.415 [2024-11-26 20:34:10.682370] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.415 [2024-11-26 20:34:10.682384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.415 20:34:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.415 [2024-11-26 20:34:10.734216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.415 BaseBdev1 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.415 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.415 [ 00:21:17.415 { 00:21:17.415 "name": "BaseBdev1", 00:21:17.415 "aliases": [ 00:21:17.415 "bccd8ec1-60bf-4fbb-b55e-9605c973bc00" 00:21:17.415 ], 00:21:17.415 "product_name": "Malloc disk", 00:21:17.415 "block_size": 4096, 00:21:17.415 "num_blocks": 8192, 00:21:17.415 "uuid": "bccd8ec1-60bf-4fbb-b55e-9605c973bc00", 00:21:17.415 "md_size": 32, 00:21:17.415 "md_interleave": false, 00:21:17.415 "dif_type": 0, 00:21:17.415 "assigned_rate_limits": { 00:21:17.415 "rw_ios_per_sec": 0, 00:21:17.415 "rw_mbytes_per_sec": 0, 00:21:17.415 "r_mbytes_per_sec": 0, 00:21:17.415 "w_mbytes_per_sec": 0 00:21:17.415 }, 00:21:17.415 "claimed": true, 00:21:17.415 "claim_type": "exclusive_write", 00:21:17.415 "zoned": false, 00:21:17.415 "supported_io_types": { 00:21:17.415 "read": true, 00:21:17.415 "write": true, 00:21:17.415 "unmap": true, 00:21:17.415 "flush": true, 00:21:17.415 "reset": true, 00:21:17.415 "nvme_admin": false, 00:21:17.415 "nvme_io": false, 00:21:17.415 "nvme_io_md": false, 00:21:17.415 "write_zeroes": true, 00:21:17.415 "zcopy": true, 00:21:17.415 "get_zone_info": false, 00:21:17.415 "zone_management": false, 00:21:17.415 "zone_append": false, 00:21:17.415 "compare": false, 00:21:17.415 "compare_and_write": false, 00:21:17.415 "abort": true, 00:21:17.415 "seek_hole": false, 00:21:17.415 "seek_data": false, 00:21:17.415 "copy": true, 00:21:17.415 "nvme_iov_md": false 00:21:17.415 }, 00:21:17.415 "memory_domains": [ 00:21:17.415 { 00:21:17.416 "dma_device_id": "system", 00:21:17.416 "dma_device_type": 1 00:21:17.416 }, 
00:21:17.416 { 00:21:17.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.416 "dma_device_type": 2 00:21:17.416 } 00:21:17.416 ], 00:21:17.416 "driver_specific": {} 00:21:17.416 } 00:21:17.416 ] 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.416 "name": "Existed_Raid", 00:21:17.416 "uuid": "e70a88f4-a4a8-483c-92e4-176f637bfb1a", 00:21:17.416 "strip_size_kb": 0, 00:21:17.416 "state": "configuring", 00:21:17.416 "raid_level": "raid1", 00:21:17.416 "superblock": true, 00:21:17.416 "num_base_bdevs": 2, 00:21:17.416 "num_base_bdevs_discovered": 1, 00:21:17.416 "num_base_bdevs_operational": 2, 00:21:17.416 "base_bdevs_list": [ 00:21:17.416 { 00:21:17.416 "name": "BaseBdev1", 00:21:17.416 "uuid": "bccd8ec1-60bf-4fbb-b55e-9605c973bc00", 00:21:17.416 "is_configured": true, 00:21:17.416 "data_offset": 256, 00:21:17.416 "data_size": 7936 00:21:17.416 }, 00:21:17.416 { 00:21:17.416 "name": "BaseBdev2", 00:21:17.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.416 "is_configured": false, 00:21:17.416 "data_offset": 0, 00:21:17.416 "data_size": 0 00:21:17.416 } 00:21:17.416 ] 00:21:17.416 }' 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.416 20:34:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:21:17.675 [2024-11-26 20:34:11.149610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.675 [2024-11-26 20:34:11.149678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.675 [2024-11-26 20:34:11.161616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.675 [2024-11-26 20:34:11.163694] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.675 [2024-11-26 20:34:11.163741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.675 "name": "Existed_Raid", 00:21:17.675 "uuid": "3fe310d6-1899-40c2-b3ba-5d0cf4639ec9", 00:21:17.675 "strip_size_kb": 0, 00:21:17.675 "state": "configuring", 00:21:17.675 "raid_level": "raid1", 00:21:17.675 "superblock": true, 00:21:17.675 "num_base_bdevs": 2, 00:21:17.675 "num_base_bdevs_discovered": 1, 00:21:17.675 
"num_base_bdevs_operational": 2, 00:21:17.675 "base_bdevs_list": [ 00:21:17.675 { 00:21:17.675 "name": "BaseBdev1", 00:21:17.675 "uuid": "bccd8ec1-60bf-4fbb-b55e-9605c973bc00", 00:21:17.675 "is_configured": true, 00:21:17.675 "data_offset": 256, 00:21:17.675 "data_size": 7936 00:21:17.675 }, 00:21:17.675 { 00:21:17.675 "name": "BaseBdev2", 00:21:17.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.675 "is_configured": false, 00:21:17.675 "data_offset": 0, 00:21:17.675 "data_size": 0 00:21:17.675 } 00:21:17.675 ] 00:21:17.675 }' 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.675 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.244 [2024-11-26 20:34:11.635627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.244 [2024-11-26 20:34:11.635896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:18.244 [2024-11-26 20:34:11.635934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:18.244 [2024-11-26 20:34:11.636026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:18.244 [2024-11-26 20:34:11.636193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:18.244 [2024-11-26 20:34:11.636215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:18.244 [2024-11-26 
20:34:11.636375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.244 BaseBdev2 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.244 [ 00:21:18.244 { 00:21:18.244 "name": "BaseBdev2", 00:21:18.244 "aliases": [ 00:21:18.244 
"9d8998bd-86d2-4ee7-a2a4-0cdbcad82bff" 00:21:18.244 ], 00:21:18.244 "product_name": "Malloc disk", 00:21:18.244 "block_size": 4096, 00:21:18.244 "num_blocks": 8192, 00:21:18.244 "uuid": "9d8998bd-86d2-4ee7-a2a4-0cdbcad82bff", 00:21:18.244 "md_size": 32, 00:21:18.244 "md_interleave": false, 00:21:18.244 "dif_type": 0, 00:21:18.244 "assigned_rate_limits": { 00:21:18.244 "rw_ios_per_sec": 0, 00:21:18.244 "rw_mbytes_per_sec": 0, 00:21:18.244 "r_mbytes_per_sec": 0, 00:21:18.244 "w_mbytes_per_sec": 0 00:21:18.244 }, 00:21:18.244 "claimed": true, 00:21:18.244 "claim_type": "exclusive_write", 00:21:18.244 "zoned": false, 00:21:18.244 "supported_io_types": { 00:21:18.244 "read": true, 00:21:18.244 "write": true, 00:21:18.244 "unmap": true, 00:21:18.244 "flush": true, 00:21:18.244 "reset": true, 00:21:18.244 "nvme_admin": false, 00:21:18.244 "nvme_io": false, 00:21:18.244 "nvme_io_md": false, 00:21:18.244 "write_zeroes": true, 00:21:18.244 "zcopy": true, 00:21:18.244 "get_zone_info": false, 00:21:18.244 "zone_management": false, 00:21:18.244 "zone_append": false, 00:21:18.244 "compare": false, 00:21:18.244 "compare_and_write": false, 00:21:18.244 "abort": true, 00:21:18.244 "seek_hole": false, 00:21:18.244 "seek_data": false, 00:21:18.244 "copy": true, 00:21:18.244 "nvme_iov_md": false 00:21:18.244 }, 00:21:18.244 "memory_domains": [ 00:21:18.244 { 00:21:18.244 "dma_device_id": "system", 00:21:18.244 "dma_device_type": 1 00:21:18.244 }, 00:21:18.244 { 00:21:18.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.244 "dma_device_type": 2 00:21:18.244 } 00:21:18.244 ], 00:21:18.244 "driver_specific": {} 00:21:18.244 } 00:21:18.244 ] 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.244 20:34:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.244 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.244 "name": "Existed_Raid", 00:21:18.244 "uuid": "3fe310d6-1899-40c2-b3ba-5d0cf4639ec9", 00:21:18.244 "strip_size_kb": 0, 00:21:18.245 "state": "online", 00:21:18.245 "raid_level": "raid1", 00:21:18.245 "superblock": true, 00:21:18.245 "num_base_bdevs": 2, 00:21:18.245 "num_base_bdevs_discovered": 2, 00:21:18.245 "num_base_bdevs_operational": 2, 00:21:18.245 "base_bdevs_list": [ 00:21:18.245 { 00:21:18.245 "name": "BaseBdev1", 00:21:18.245 "uuid": "bccd8ec1-60bf-4fbb-b55e-9605c973bc00", 00:21:18.245 "is_configured": true, 00:21:18.245 "data_offset": 256, 00:21:18.245 "data_size": 7936 00:21:18.245 }, 00:21:18.245 { 00:21:18.245 "name": "BaseBdev2", 00:21:18.245 "uuid": "9d8998bd-86d2-4ee7-a2a4-0cdbcad82bff", 00:21:18.245 "is_configured": true, 00:21:18.245 "data_offset": 256, 00:21:18.245 "data_size": 7936 00:21:18.245 } 00:21:18.245 ] 00:21:18.245 }' 00:21:18.245 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.245 20:34:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:18.812 20:34:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.812 [2024-11-26 20:34:12.067349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.812 "name": "Existed_Raid", 00:21:18.812 "aliases": [ 00:21:18.812 "3fe310d6-1899-40c2-b3ba-5d0cf4639ec9" 00:21:18.812 ], 00:21:18.812 "product_name": "Raid Volume", 00:21:18.812 "block_size": 4096, 00:21:18.812 "num_blocks": 7936, 00:21:18.812 "uuid": "3fe310d6-1899-40c2-b3ba-5d0cf4639ec9", 00:21:18.812 "md_size": 32, 00:21:18.812 "md_interleave": false, 00:21:18.812 "dif_type": 0, 00:21:18.812 "assigned_rate_limits": { 00:21:18.812 "rw_ios_per_sec": 0, 00:21:18.812 "rw_mbytes_per_sec": 0, 00:21:18.812 "r_mbytes_per_sec": 0, 00:21:18.812 "w_mbytes_per_sec": 0 00:21:18.812 }, 00:21:18.812 "claimed": false, 00:21:18.812 "zoned": false, 00:21:18.812 "supported_io_types": { 00:21:18.812 "read": true, 00:21:18.812 "write": true, 00:21:18.812 "unmap": false, 00:21:18.812 "flush": false, 00:21:18.812 "reset": true, 00:21:18.812 "nvme_admin": false, 00:21:18.812 "nvme_io": false, 00:21:18.812 "nvme_io_md": false, 00:21:18.812 "write_zeroes": true, 00:21:18.812 "zcopy": false, 00:21:18.812 "get_zone_info": 
false, 00:21:18.812 "zone_management": false, 00:21:18.812 "zone_append": false, 00:21:18.812 "compare": false, 00:21:18.812 "compare_and_write": false, 00:21:18.812 "abort": false, 00:21:18.812 "seek_hole": false, 00:21:18.812 "seek_data": false, 00:21:18.812 "copy": false, 00:21:18.812 "nvme_iov_md": false 00:21:18.812 }, 00:21:18.812 "memory_domains": [ 00:21:18.812 { 00:21:18.812 "dma_device_id": "system", 00:21:18.812 "dma_device_type": 1 00:21:18.812 }, 00:21:18.812 { 00:21:18.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.812 "dma_device_type": 2 00:21:18.812 }, 00:21:18.812 { 00:21:18.812 "dma_device_id": "system", 00:21:18.812 "dma_device_type": 1 00:21:18.812 }, 00:21:18.812 { 00:21:18.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.812 "dma_device_type": 2 00:21:18.812 } 00:21:18.812 ], 00:21:18.812 "driver_specific": { 00:21:18.812 "raid": { 00:21:18.812 "uuid": "3fe310d6-1899-40c2-b3ba-5d0cf4639ec9", 00:21:18.812 "strip_size_kb": 0, 00:21:18.812 "state": "online", 00:21:18.812 "raid_level": "raid1", 00:21:18.812 "superblock": true, 00:21:18.812 "num_base_bdevs": 2, 00:21:18.812 "num_base_bdevs_discovered": 2, 00:21:18.812 "num_base_bdevs_operational": 2, 00:21:18.812 "base_bdevs_list": [ 00:21:18.812 { 00:21:18.812 "name": "BaseBdev1", 00:21:18.812 "uuid": "bccd8ec1-60bf-4fbb-b55e-9605c973bc00", 00:21:18.812 "is_configured": true, 00:21:18.812 "data_offset": 256, 00:21:18.812 "data_size": 7936 00:21:18.812 }, 00:21:18.812 { 00:21:18.812 "name": "BaseBdev2", 00:21:18.812 "uuid": "9d8998bd-86d2-4ee7-a2a4-0cdbcad82bff", 00:21:18.812 "is_configured": true, 00:21:18.812 "data_offset": 256, 00:21:18.812 "data_size": 7936 00:21:18.812 } 00:21:18.812 ] 00:21:18.812 } 00:21:18.812 } 00:21:18.812 }' 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:18.812 20:34:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:18.812 BaseBdev2' 00:21:18.812 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.813 20:34:12 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.813 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.813 [2024-11-26 20:34:12.278717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.073 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.074 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.074 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.074 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.074 "name": "Existed_Raid", 
00:21:19.074 "uuid": "3fe310d6-1899-40c2-b3ba-5d0cf4639ec9", 00:21:19.074 "strip_size_kb": 0, 00:21:19.074 "state": "online", 00:21:19.074 "raid_level": "raid1", 00:21:19.074 "superblock": true, 00:21:19.074 "num_base_bdevs": 2, 00:21:19.074 "num_base_bdevs_discovered": 1, 00:21:19.074 "num_base_bdevs_operational": 1, 00:21:19.074 "base_bdevs_list": [ 00:21:19.074 { 00:21:19.074 "name": null, 00:21:19.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.074 "is_configured": false, 00:21:19.074 "data_offset": 0, 00:21:19.074 "data_size": 7936 00:21:19.074 }, 00:21:19.074 { 00:21:19.074 "name": "BaseBdev2", 00:21:19.074 "uuid": "9d8998bd-86d2-4ee7-a2a4-0cdbcad82bff", 00:21:19.074 "is_configured": true, 00:21:19.074 "data_offset": 256, 00:21:19.074 "data_size": 7936 00:21:19.074 } 00:21:19.074 ] 00:21:19.074 }' 00:21:19.074 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.074 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.333 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:19.333 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.333 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.333 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.333 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.333 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:19.593 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.593 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:19.593 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:19.593 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:19.593 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.593 20:34:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.593 [2024-11-26 20:34:12.929682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:19.593 [2024-11-26 20:34:12.929807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.593 [2024-11-26 20:34:13.036953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.593 [2024-11-26 20:34:13.037029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.593 [2024-11-26 20:34:13.037042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:19.593 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87681 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87681 ']' 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87681 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87681 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.594 killing process with pid 87681 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87681' 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87681 00:21:19.594 [2024-11-26 20:34:13.130684] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:19.594 20:34:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87681 00:21:19.853 [2024-11-26 20:34:13.148927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:21.226 20:34:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:21:21.226 00:21:21.226 real 0m5.209s 00:21:21.226 user 0m7.379s 00:21:21.226 sys 0m0.782s 00:21:21.226 20:34:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.226 20:34:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.226 ************************************ 00:21:21.226 END TEST raid_state_function_test_sb_md_separate 00:21:21.226 ************************************ 00:21:21.226 20:34:14 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:21.226 20:34:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:21.226 20:34:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.226 20:34:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:21.226 ************************************ 00:21:21.226 START TEST raid_superblock_test_md_separate 00:21:21.226 ************************************ 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87929 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87929 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87929 ']' 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.226 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.226 20:34:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:21.226 [2024-11-26 20:34:14.611898] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:21:21.226 [2024-11-26 20:34:14.612041] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87929 ] 00:21:21.485 [2024-11-26 20:34:14.787210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.485 [2024-11-26 20:34:14.917752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.743 [2024-11-26 20:34:15.157883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.743 [2024-11-26 20:34:15.157932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:22.001 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.001 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:22.001 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:22.001 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:22.001 20:34:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:22.001 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:22.001 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.002 malloc1 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.002 [2024-11-26 20:34:15.540139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:22.002 [2024-11-26 20:34:15.540198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.002 [2024-11-26 20:34:15.540225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:21:22.002 [2024-11-26 20:34:15.540235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.002 [2024-11-26 20:34:15.542469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.002 [2024-11-26 20:34:15.542505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:22.002 pt1 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.002 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.306 malloc2 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.306 [2024-11-26 20:34:15.602601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:22.306 [2024-11-26 20:34:15.602662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.306 [2024-11-26 20:34:15.602687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:22.306 [2024-11-26 20:34:15.602698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.306 [2024-11-26 20:34:15.604903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.306 [2024-11-26 20:34:15.604943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:22.306 pt2 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.306 [2024-11-26 20:34:15.614600] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:22.306 [2024-11-26 20:34:15.616712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:22.306 [2024-11-26 20:34:15.616940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:22.306 [2024-11-26 20:34:15.616966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:22.306 [2024-11-26 20:34:15.617072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:22.306 [2024-11-26 20:34:15.617255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:22.306 [2024-11-26 20:34:15.617278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:22.306 [2024-11-26 20:34:15.617412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.306 20:34:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.306 "name": "raid_bdev1", 00:21:22.306 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:22.306 "strip_size_kb": 0, 00:21:22.306 "state": "online", 00:21:22.306 "raid_level": "raid1", 00:21:22.306 "superblock": true, 00:21:22.306 "num_base_bdevs": 2, 00:21:22.306 "num_base_bdevs_discovered": 2, 00:21:22.306 "num_base_bdevs_operational": 2, 00:21:22.306 "base_bdevs_list": [ 00:21:22.306 { 00:21:22.306 "name": "pt1", 00:21:22.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.306 "is_configured": true, 00:21:22.306 "data_offset": 256, 00:21:22.306 "data_size": 7936 00:21:22.306 }, 00:21:22.306 { 00:21:22.306 "name": "pt2", 00:21:22.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.306 "is_configured": true, 00:21:22.306 "data_offset": 256, 00:21:22.306 "data_size": 7936 00:21:22.306 } 00:21:22.306 ] 00:21:22.306 }' 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:22.306 20:34:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.565 [2024-11-26 20:34:16.058213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.565 "name": "raid_bdev1", 00:21:22.565 "aliases": [ 00:21:22.565 "af17bcec-0e16-4412-bdd3-30ff0d3c26d2" 00:21:22.565 ], 00:21:22.565 "product_name": "Raid Volume", 00:21:22.565 "block_size": 4096, 00:21:22.565 "num_blocks": 7936, 00:21:22.565 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:22.565 "md_size": 32, 
00:21:22.565 "md_interleave": false, 00:21:22.565 "dif_type": 0, 00:21:22.565 "assigned_rate_limits": { 00:21:22.565 "rw_ios_per_sec": 0, 00:21:22.565 "rw_mbytes_per_sec": 0, 00:21:22.565 "r_mbytes_per_sec": 0, 00:21:22.565 "w_mbytes_per_sec": 0 00:21:22.565 }, 00:21:22.565 "claimed": false, 00:21:22.565 "zoned": false, 00:21:22.565 "supported_io_types": { 00:21:22.565 "read": true, 00:21:22.565 "write": true, 00:21:22.565 "unmap": false, 00:21:22.565 "flush": false, 00:21:22.565 "reset": true, 00:21:22.565 "nvme_admin": false, 00:21:22.565 "nvme_io": false, 00:21:22.565 "nvme_io_md": false, 00:21:22.565 "write_zeroes": true, 00:21:22.565 "zcopy": false, 00:21:22.565 "get_zone_info": false, 00:21:22.565 "zone_management": false, 00:21:22.565 "zone_append": false, 00:21:22.565 "compare": false, 00:21:22.565 "compare_and_write": false, 00:21:22.565 "abort": false, 00:21:22.565 "seek_hole": false, 00:21:22.565 "seek_data": false, 00:21:22.565 "copy": false, 00:21:22.565 "nvme_iov_md": false 00:21:22.565 }, 00:21:22.565 "memory_domains": [ 00:21:22.565 { 00:21:22.565 "dma_device_id": "system", 00:21:22.565 "dma_device_type": 1 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.565 "dma_device_type": 2 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "dma_device_id": "system", 00:21:22.565 "dma_device_type": 1 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.565 "dma_device_type": 2 00:21:22.565 } 00:21:22.565 ], 00:21:22.565 "driver_specific": { 00:21:22.565 "raid": { 00:21:22.565 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:22.565 "strip_size_kb": 0, 00:21:22.565 "state": "online", 00:21:22.565 "raid_level": "raid1", 00:21:22.565 "superblock": true, 00:21:22.565 "num_base_bdevs": 2, 00:21:22.565 "num_base_bdevs_discovered": 2, 00:21:22.565 "num_base_bdevs_operational": 2, 00:21:22.565 "base_bdevs_list": [ 00:21:22.565 { 00:21:22.565 "name": "pt1", 00:21:22.565 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:22.565 "is_configured": true, 00:21:22.565 "data_offset": 256, 00:21:22.565 "data_size": 7936 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "name": "pt2", 00:21:22.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.565 "is_configured": true, 00:21:22.565 "data_offset": 256, 00:21:22.565 "data_size": 7936 00:21:22.565 } 00:21:22.565 ] 00:21:22.565 } 00:21:22.565 } 00:21:22.565 }' 00:21:22.565 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:22.825 pt2' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.825 [2024-11-26 20:34:16.285778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=af17bcec-0e16-4412-bdd3-30ff0d3c26d2 00:21:22.825 
20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z af17bcec-0e16-4412-bdd3-30ff0d3c26d2 ']' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.825 [2024-11-26 20:34:16.321380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.825 [2024-11-26 20:34:16.321413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.825 [2024-11-26 20:34:16.321517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.825 [2024-11-26 20:34:16.321584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.825 [2024-11-26 20:34:16.321599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:22.825 20:34:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.825 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.084 [2024-11-26 20:34:16.449222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:23.084 [2024-11-26 20:34:16.451407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:23.084 [2024-11-26 20:34:16.451501] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:23.084 [2024-11-26 20:34:16.451563] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:21:23.084 [2024-11-26 20:34:16.451580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.084 [2024-11-26 20:34:16.451593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:23.084 request: 00:21:23.084 { 00:21:23.084 "name": "raid_bdev1", 00:21:23.084 "raid_level": "raid1", 00:21:23.084 "base_bdevs": [ 00:21:23.084 "malloc1", 00:21:23.084 "malloc2" 00:21:23.084 ], 00:21:23.084 "superblock": false, 00:21:23.084 "method": "bdev_raid_create", 00:21:23.084 "req_id": 1 00:21:23.084 } 00:21:23.084 Got JSON-RPC error response 00:21:23.084 response: 00:21:23.084 { 00:21:23.084 "code": -17, 00:21:23.084 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:23.084 } 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.084 20:34:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.084 [2024-11-26 20:34:16.493134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:23.084 [2024-11-26 20:34:16.493202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.084 [2024-11-26 20:34:16.493221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:23.084 [2024-11-26 20:34:16.493234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.084 [2024-11-26 20:34:16.495517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.084 [2024-11-26 20:34:16.495556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:23.084 [2024-11-26 20:34:16.495618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:23.084 [2024-11-26 20:34:16.495686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:23.084 pt1 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:23.084 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.084 
20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.085 "name": "raid_bdev1", 00:21:23.085 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:23.085 "strip_size_kb": 0, 00:21:23.085 "state": "configuring", 00:21:23.085 "raid_level": "raid1", 00:21:23.085 "superblock": true, 00:21:23.085 "num_base_bdevs": 2, 00:21:23.085 "num_base_bdevs_discovered": 1, 00:21:23.085 
"num_base_bdevs_operational": 2, 00:21:23.085 "base_bdevs_list": [ 00:21:23.085 { 00:21:23.085 "name": "pt1", 00:21:23.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.085 "is_configured": true, 00:21:23.085 "data_offset": 256, 00:21:23.085 "data_size": 7936 00:21:23.085 }, 00:21:23.085 { 00:21:23.085 "name": null, 00:21:23.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.085 "is_configured": false, 00:21:23.085 "data_offset": 256, 00:21:23.085 "data_size": 7936 00:21:23.085 } 00:21:23.085 ] 00:21:23.085 }' 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.085 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.651 [2024-11-26 20:34:16.948471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:23.651 [2024-11-26 20:34:16.948555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.651 [2024-11-26 20:34:16.948579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:23.651 [2024-11-26 20:34:16.948590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.651 
[2024-11-26 20:34:16.948872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.651 [2024-11-26 20:34:16.948901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:23.651 [2024-11-26 20:34:16.948972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:23.651 [2024-11-26 20:34:16.949001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:23.651 [2024-11-26 20:34:16.949148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:23.651 [2024-11-26 20:34:16.949171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:23.651 [2024-11-26 20:34:16.949287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:23.651 [2024-11-26 20:34:16.949449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:23.651 [2024-11-26 20:34:16.949467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:23.651 [2024-11-26 20:34:16.949586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.651 pt2 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.651 "name": "raid_bdev1", 00:21:23.651 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:23.651 "strip_size_kb": 0, 00:21:23.651 "state": "online", 00:21:23.651 "raid_level": "raid1", 00:21:23.651 "superblock": true, 00:21:23.651 "num_base_bdevs": 2, 00:21:23.651 "num_base_bdevs_discovered": 2, 00:21:23.651 "num_base_bdevs_operational": 2, 00:21:23.651 "base_bdevs_list": [ 00:21:23.651 { 00:21:23.651 "name": 
"pt1", 00:21:23.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.651 "is_configured": true, 00:21:23.651 "data_offset": 256, 00:21:23.651 "data_size": 7936 00:21:23.651 }, 00:21:23.651 { 00:21:23.651 "name": "pt2", 00:21:23.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.651 "is_configured": true, 00:21:23.651 "data_offset": 256, 00:21:23.651 "data_size": 7936 00:21:23.651 } 00:21:23.651 ] 00:21:23.651 }' 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.651 20:34:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:23.911 [2024-11-26 20:34:17.408081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.911 20:34:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:23.911 "name": "raid_bdev1", 00:21:23.911 "aliases": [ 00:21:23.911 "af17bcec-0e16-4412-bdd3-30ff0d3c26d2" 00:21:23.911 ], 00:21:23.911 "product_name": "Raid Volume", 00:21:23.911 "block_size": 4096, 00:21:23.911 "num_blocks": 7936, 00:21:23.911 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:23.911 "md_size": 32, 00:21:23.911 "md_interleave": false, 00:21:23.911 "dif_type": 0, 00:21:23.911 "assigned_rate_limits": { 00:21:23.911 "rw_ios_per_sec": 0, 00:21:23.911 "rw_mbytes_per_sec": 0, 00:21:23.911 "r_mbytes_per_sec": 0, 00:21:23.911 "w_mbytes_per_sec": 0 00:21:23.911 }, 00:21:23.911 "claimed": false, 00:21:23.911 "zoned": false, 00:21:23.911 "supported_io_types": { 00:21:23.911 "read": true, 00:21:23.911 "write": true, 00:21:23.911 "unmap": false, 00:21:23.911 "flush": false, 00:21:23.911 "reset": true, 00:21:23.911 "nvme_admin": false, 00:21:23.911 "nvme_io": false, 00:21:23.911 "nvme_io_md": false, 00:21:23.911 "write_zeroes": true, 00:21:23.911 "zcopy": false, 00:21:23.911 "get_zone_info": false, 00:21:23.911 "zone_management": false, 00:21:23.911 "zone_append": false, 00:21:23.911 "compare": false, 00:21:23.911 "compare_and_write": false, 00:21:23.911 "abort": false, 00:21:23.911 "seek_hole": false, 00:21:23.911 "seek_data": false, 00:21:23.911 "copy": false, 00:21:23.911 "nvme_iov_md": false 00:21:23.911 }, 00:21:23.911 "memory_domains": [ 00:21:23.911 { 00:21:23.911 "dma_device_id": "system", 00:21:23.911 "dma_device_type": 1 00:21:23.911 }, 00:21:23.911 { 00:21:23.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.911 "dma_device_type": 2 00:21:23.911 }, 00:21:23.911 { 00:21:23.911 "dma_device_id": "system", 00:21:23.911 "dma_device_type": 1 00:21:23.911 }, 00:21:23.911 { 00:21:23.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.911 
"dma_device_type": 2 00:21:23.911 } 00:21:23.911 ], 00:21:23.911 "driver_specific": { 00:21:23.911 "raid": { 00:21:23.911 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:23.911 "strip_size_kb": 0, 00:21:23.911 "state": "online", 00:21:23.911 "raid_level": "raid1", 00:21:23.911 "superblock": true, 00:21:23.911 "num_base_bdevs": 2, 00:21:23.911 "num_base_bdevs_discovered": 2, 00:21:23.911 "num_base_bdevs_operational": 2, 00:21:23.911 "base_bdevs_list": [ 00:21:23.911 { 00:21:23.911 "name": "pt1", 00:21:23.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.911 "is_configured": true, 00:21:23.911 "data_offset": 256, 00:21:23.911 "data_size": 7936 00:21:23.911 }, 00:21:23.911 { 00:21:23.911 "name": "pt2", 00:21:23.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.911 "is_configured": true, 00:21:23.911 "data_offset": 256, 00:21:23.911 "data_size": 7936 00:21:23.911 } 00:21:23.911 ] 00:21:23.911 } 00:21:23.911 } 00:21:23.911 }' 00:21:23.911 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:24.170 pt2' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.170 20:34:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.170 20:34:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:24.170 [2024-11-26 20:34:17.639722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' af17bcec-0e16-4412-bdd3-30ff0d3c26d2 '!=' af17bcec-0e16-4412-bdd3-30ff0d3c26d2 ']' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.170 [2024-11-26 20:34:17.679377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.170 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.429 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.429 "name": "raid_bdev1", 00:21:24.429 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:24.429 "strip_size_kb": 0, 00:21:24.429 "state": "online", 00:21:24.429 "raid_level": "raid1", 00:21:24.429 "superblock": true, 00:21:24.429 "num_base_bdevs": 2, 00:21:24.429 "num_base_bdevs_discovered": 1, 00:21:24.429 "num_base_bdevs_operational": 1, 00:21:24.429 "base_bdevs_list": [ 00:21:24.429 { 00:21:24.429 "name": null, 00:21:24.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.429 
"is_configured": false, 00:21:24.429 "data_offset": 0, 00:21:24.429 "data_size": 7936 00:21:24.429 }, 00:21:24.429 { 00:21:24.429 "name": "pt2", 00:21:24.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.429 "is_configured": true, 00:21:24.429 "data_offset": 256, 00:21:24.429 "data_size": 7936 00:21:24.429 } 00:21:24.429 ] 00:21:24.429 }' 00:21:24.429 20:34:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.429 20:34:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.687 [2024-11-26 20:34:18.106567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.687 [2024-11-26 20:34:18.106598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.687 [2024-11-26 20:34:18.106711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.687 [2024-11-26 20:34:18.106764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.687 [2024-11-26 20:34:18.106776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.687 20:34:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:24.687 20:34:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.687 [2024-11-26 20:34:18.182438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:24.687 [2024-11-26 20:34:18.182502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.687 [2024-11-26 20:34:18.182521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:24.687 [2024-11-26 20:34:18.182533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.687 [2024-11-26 20:34:18.184784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.687 [2024-11-26 20:34:18.184822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:24.687 [2024-11-26 20:34:18.184901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:24.687 [2024-11-26 20:34:18.184980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.687 [2024-11-26 20:34:18.185181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:24.687 [2024-11-26 20:34:18.185205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:24.687 [2024-11-26 20:34:18.185318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:24.687 [2024-11-26 20:34:18.185466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:24.687 [2024-11-26 20:34:18.185483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:24.687 [2024-11-26 20:34:18.185616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.687 pt2 00:21:24.687 20:34:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.687 "name": "raid_bdev1", 00:21:24.687 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:24.687 "strip_size_kb": 0, 00:21:24.687 "state": "online", 00:21:24.687 "raid_level": "raid1", 00:21:24.687 "superblock": true, 00:21:24.687 "num_base_bdevs": 2, 00:21:24.687 "num_base_bdevs_discovered": 1, 00:21:24.687 "num_base_bdevs_operational": 1, 00:21:24.687 "base_bdevs_list": [ 00:21:24.687 { 00:21:24.687 "name": null, 00:21:24.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.687 "is_configured": false, 00:21:24.687 "data_offset": 256, 00:21:24.687 "data_size": 7936 00:21:24.687 }, 00:21:24.687 { 00:21:24.687 "name": "pt2", 00:21:24.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.687 "is_configured": true, 00:21:24.687 "data_offset": 256, 00:21:24.687 "data_size": 7936 00:21:24.687 } 00:21:24.687 ] 00:21:24.687 }' 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.687 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.254 [2024-11-26 20:34:18.657643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.254 [2024-11-26 20:34:18.657685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:25.254 [2024-11-26 20:34:18.657774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.254 [2024-11-26 20:34:18.657845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:21:25.254 [2024-11-26 20:34:18.657865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.254 [2024-11-26 20:34:18.705605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:25.254 [2024-11-26 20:34:18.705680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.254 [2024-11-26 20:34:18.705700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:25.254 [2024-11-26 
20:34:18.705710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.254 [2024-11-26 20:34:18.707745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.254 [2024-11-26 20:34:18.707782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:25.254 [2024-11-26 20:34:18.707849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:25.254 [2024-11-26 20:34:18.707895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:25.254 [2024-11-26 20:34:18.708061] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:25.254 [2024-11-26 20:34:18.708078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.254 [2024-11-26 20:34:18.708099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:25.254 [2024-11-26 20:34:18.708178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:25.254 [2024-11-26 20:34:18.708272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:25.254 [2024-11-26 20:34:18.708281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:25.254 [2024-11-26 20:34:18.708349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:25.254 [2024-11-26 20:34:18.708460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:25.254 [2024-11-26 20:34:18.708478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:25.254 [2024-11-26 20:34:18.708608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.254 pt1 00:21:25.254 20:34:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.254 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.255 "name": "raid_bdev1", 00:21:25.255 "uuid": "af17bcec-0e16-4412-bdd3-30ff0d3c26d2", 00:21:25.255 "strip_size_kb": 0, 00:21:25.255 "state": "online", 00:21:25.255 "raid_level": "raid1", 00:21:25.255 "superblock": true, 00:21:25.255 "num_base_bdevs": 2, 00:21:25.255 "num_base_bdevs_discovered": 1, 00:21:25.255 "num_base_bdevs_operational": 1, 00:21:25.255 "base_bdevs_list": [ 00:21:25.255 { 00:21:25.255 "name": null, 00:21:25.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.255 "is_configured": false, 00:21:25.255 "data_offset": 256, 00:21:25.255 "data_size": 7936 00:21:25.255 }, 00:21:25.255 { 00:21:25.255 "name": "pt2", 00:21:25.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:25.255 "is_configured": true, 00:21:25.255 "data_offset": 256, 00:21:25.255 "data_size": 7936 00:21:25.255 } 00:21:25.255 ] 00:21:25.255 }' 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.255 20:34:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:25.821 20:34:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.821 [2024-11-26 20:34:19.205046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' af17bcec-0e16-4412-bdd3-30ff0d3c26d2 '!=' af17bcec-0e16-4412-bdd3-30ff0d3c26d2 ']' 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87929 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87929 ']' 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87929 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87929 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87929' 00:21:25.821 
killing process with pid 87929 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87929 00:21:25.821 [2024-11-26 20:34:19.274133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.821 [2024-11-26 20:34:19.274266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.821 20:34:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87929 00:21:25.822 [2024-11-26 20:34:19.274335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.822 [2024-11-26 20:34:19.274361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:26.080 [2024-11-26 20:34:19.532719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.451 20:34:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:21:27.451 00:21:27.451 real 0m6.263s 00:21:27.451 user 0m9.398s 00:21:27.451 sys 0m1.086s 00:21:27.451 20:34:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.451 20:34:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.451 ************************************ 00:21:27.451 END TEST raid_superblock_test_md_separate 00:21:27.451 ************************************ 00:21:27.451 20:34:20 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:21:27.452 20:34:20 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:21:27.452 20:34:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:27.452 20:34:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.452 20:34:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:27.452 ************************************ 
00:21:27.452 START TEST raid_rebuild_test_sb_md_separate 00:21:27.452 ************************************ 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:27.452 
20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88252 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88252 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88252 ']' 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.452 20:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:27.452 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.452 Zero copy mechanism will not be used. 00:21:27.452 [2024-11-26 20:34:20.950096] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:21:27.452 [2024-11-26 20:34:20.950231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88252 ] 00:21:27.711 [2024-11-26 20:34:21.124549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.711 [2024-11-26 20:34:21.257317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.969 [2024-11-26 20:34:21.483903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.969 [2024-11-26 20:34:21.483946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:21:28.536 20:34:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 BaseBdev1_malloc 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 [2024-11-26 20:34:21.878382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.536 [2024-11-26 20:34:21.878458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.536 [2024-11-26 20:34:21.878495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:28.536 [2024-11-26 20:34:21.878508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.536 [2024-11-26 20:34:21.880721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.536 [2024-11-26 20:34:21.880756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.536 BaseBdev1 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 BaseBdev2_malloc 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 [2024-11-26 20:34:21.940234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:28.536 [2024-11-26 20:34:21.940322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.536 [2024-11-26 20:34:21.940349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:28.536 [2024-11-26 20:34:21.940364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.536 [2024-11-26 20:34:21.942554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.536 [2024-11-26 20:34:21.942591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:28.536 BaseBdev2 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.536 20:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 spare_malloc 00:21:28.536 20:34:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 spare_delay 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 [2024-11-26 20:34:22.015516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.536 [2024-11-26 20:34:22.015577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.536 [2024-11-26 20:34:22.015599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:28.536 [2024-11-26 20:34:22.015611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.536 [2024-11-26 20:34:22.017876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.536 [2024-11-26 20:34:22.017920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.536 spare 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 [2024-11-26 20:34:22.027541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.536 [2024-11-26 20:34:22.029547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.536 [2024-11-26 20:34:22.029762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:28.536 [2024-11-26 20:34:22.029789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:28.536 [2024-11-26 20:34:22.029881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:28.536 [2024-11-26 20:34:22.030042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:28.536 [2024-11-26 20:34:22.030061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:28.536 [2024-11-26 20:34:22.030192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.536 20:34:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.536 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.537 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.537 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.537 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.537 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.537 "name": "raid_bdev1", 00:21:28.537 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:28.537 "strip_size_kb": 0, 00:21:28.537 "state": "online", 00:21:28.537 "raid_level": "raid1", 00:21:28.537 "superblock": true, 00:21:28.537 "num_base_bdevs": 2, 00:21:28.537 "num_base_bdevs_discovered": 2, 00:21:28.537 "num_base_bdevs_operational": 2, 00:21:28.537 "base_bdevs_list": [ 00:21:28.537 { 00:21:28.537 "name": "BaseBdev1", 00:21:28.537 "uuid": "a3f07d4c-7ba3-5728-9ce6-0ba18d7ac6f0", 00:21:28.537 "is_configured": true, 00:21:28.537 "data_offset": 256, 00:21:28.537 
"data_size": 7936 00:21:28.537 }, 00:21:28.537 { 00:21:28.537 "name": "BaseBdev2", 00:21:28.537 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:28.537 "is_configured": true, 00:21:28.537 "data_offset": 256, 00:21:28.537 "data_size": 7936 00:21:28.537 } 00:21:28.537 ] 00:21:28.537 }' 00:21:28.537 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.537 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 [2024-11-26 20:34:22.495120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.103 20:34:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.103 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:29.361 [2024-11-26 20:34:22.798376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:29.361 /dev/nbd0 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:29.361 20:34:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.361 1+0 records in 00:21:29.361 1+0 records out 00:21:29.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309231 s, 13.2 MB/s 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:29.361 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:21:29.362 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.362 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.362 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:29.362 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:29.362 20:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:30.296 7936+0 records in 00:21:30.296 7936+0 records out 00:21:30.296 32505856 bytes (33 MB, 31 MiB) copied, 0.747071 s, 43.5 MB/s 00:21:30.296 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:30.296 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:30.296 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:30.296 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:30.296 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:30.296 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:30.296 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:30.555 [2024-11-26 20:34:23.867497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.555 [2024-11-26 20:34:23.879603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.555 "name": "raid_bdev1", 00:21:30.555 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:30.555 "strip_size_kb": 0, 00:21:30.555 "state": "online", 00:21:30.555 "raid_level": "raid1", 00:21:30.555 "superblock": true, 00:21:30.555 "num_base_bdevs": 2, 00:21:30.555 "num_base_bdevs_discovered": 1, 00:21:30.555 "num_base_bdevs_operational": 1, 00:21:30.555 "base_bdevs_list": [ 00:21:30.555 { 00:21:30.555 "name": null, 00:21:30.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.555 "is_configured": false, 00:21:30.555 "data_offset": 0, 00:21:30.555 "data_size": 7936 00:21:30.555 }, 00:21:30.555 { 00:21:30.555 "name": "BaseBdev2", 00:21:30.555 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:30.555 "is_configured": 
true, 00:21:30.555 "data_offset": 256, 00:21:30.555 "data_size": 7936 00:21:30.555 } 00:21:30.555 ] 00:21:30.555 }' 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.555 20:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.813 20:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:30.813 20:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.813 20:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.813 [2024-11-26 20:34:24.262955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.813 [2024-11-26 20:34:24.280941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:30.813 20:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.813 20:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:30.813 [2024-11-26 20:34:24.283059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.749 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.007 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.007 "name": "raid_bdev1", 00:21:32.007 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:32.007 "strip_size_kb": 0, 00:21:32.007 "state": "online", 00:21:32.007 "raid_level": "raid1", 00:21:32.007 "superblock": true, 00:21:32.007 "num_base_bdevs": 2, 00:21:32.007 "num_base_bdevs_discovered": 2, 00:21:32.007 "num_base_bdevs_operational": 2, 00:21:32.007 "process": { 00:21:32.007 "type": "rebuild", 00:21:32.007 "target": "spare", 00:21:32.007 "progress": { 00:21:32.007 "blocks": 2560, 00:21:32.007 "percent": 32 00:21:32.007 } 00:21:32.007 }, 00:21:32.007 "base_bdevs_list": [ 00:21:32.007 { 00:21:32.007 "name": "spare", 00:21:32.007 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:32.007 "is_configured": true, 00:21:32.007 "data_offset": 256, 00:21:32.007 "data_size": 7936 00:21:32.007 }, 00:21:32.007 { 00:21:32.007 "name": "BaseBdev2", 00:21:32.007 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:32.007 "is_configured": true, 00:21:32.007 "data_offset": 256, 00:21:32.007 "data_size": 7936 00:21:32.007 } 00:21:32.007 ] 00:21:32.007 }' 00:21:32.007 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.007 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.007 
20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.007 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.007 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:32.007 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.007 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.007 [2024-11-26 20:34:25.438612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.007 [2024-11-26 20:34:25.488911] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:32.007 [2024-11-26 20:34:25.489030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.007 [2024-11-26 20:34:25.489048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.008 [2024-11-26 20:34:25.489063] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.008 20:34:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.008 "name": "raid_bdev1", 00:21:32.008 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:32.008 "strip_size_kb": 0, 00:21:32.008 "state": "online", 00:21:32.008 "raid_level": "raid1", 00:21:32.008 "superblock": true, 00:21:32.008 "num_base_bdevs": 2, 00:21:32.008 "num_base_bdevs_discovered": 1, 00:21:32.008 "num_base_bdevs_operational": 1, 00:21:32.008 "base_bdevs_list": [ 00:21:32.008 { 00:21:32.008 "name": null, 00:21:32.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.008 "is_configured": false, 00:21:32.008 "data_offset": 0, 00:21:32.008 "data_size": 7936 00:21:32.008 }, 00:21:32.008 { 00:21:32.008 "name": "BaseBdev2", 00:21:32.008 "uuid": 
"4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:32.008 "is_configured": true, 00:21:32.008 "data_offset": 256, 00:21:32.008 "data_size": 7936 00:21:32.008 } 00:21:32.008 ] 00:21:32.008 }' 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.008 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.574 20:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.574 "name": "raid_bdev1", 00:21:32.574 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:32.574 "strip_size_kb": 0, 00:21:32.574 "state": "online", 00:21:32.574 "raid_level": "raid1", 00:21:32.574 "superblock": true, 00:21:32.574 
"num_base_bdevs": 2, 00:21:32.574 "num_base_bdevs_discovered": 1, 00:21:32.574 "num_base_bdevs_operational": 1, 00:21:32.574 "base_bdevs_list": [ 00:21:32.574 { 00:21:32.574 "name": null, 00:21:32.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.574 "is_configured": false, 00:21:32.574 "data_offset": 0, 00:21:32.574 "data_size": 7936 00:21:32.574 }, 00:21:32.574 { 00:21:32.574 "name": "BaseBdev2", 00:21:32.574 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:32.574 "is_configured": true, 00:21:32.574 "data_offset": 256, 00:21:32.574 "data_size": 7936 00:21:32.574 } 00:21:32.574 ] 00:21:32.574 }' 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.574 [2024-11-26 20:34:26.087023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.574 [2024-11-26 20:34:26.104355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.574 20:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:32.574 [2024-11-26 20:34:26.106540] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.950 "name": "raid_bdev1", 00:21:33.950 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:33.950 "strip_size_kb": 0, 00:21:33.950 "state": "online", 00:21:33.950 "raid_level": "raid1", 00:21:33.950 "superblock": true, 00:21:33.950 "num_base_bdevs": 2, 00:21:33.950 "num_base_bdevs_discovered": 2, 00:21:33.950 "num_base_bdevs_operational": 2, 00:21:33.950 "process": { 00:21:33.950 "type": "rebuild", 00:21:33.950 "target": "spare", 00:21:33.950 "progress": { 00:21:33.950 "blocks": 2560, 00:21:33.950 "percent": 32 00:21:33.950 } 00:21:33.950 
}, 00:21:33.950 "base_bdevs_list": [ 00:21:33.950 { 00:21:33.950 "name": "spare", 00:21:33.950 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:33.950 "is_configured": true, 00:21:33.950 "data_offset": 256, 00:21:33.950 "data_size": 7936 00:21:33.950 }, 00:21:33.950 { 00:21:33.950 "name": "BaseBdev2", 00:21:33.950 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:33.950 "is_configured": true, 00:21:33.950 "data_offset": 256, 00:21:33.950 "data_size": 7936 00:21:33.950 } 00:21:33.950 ] 00:21:33.950 }' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:33.950 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=740 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:33.950 20:34:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.950 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.950 "name": "raid_bdev1", 00:21:33.950 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:33.950 "strip_size_kb": 0, 00:21:33.950 "state": "online", 00:21:33.950 "raid_level": "raid1", 00:21:33.950 "superblock": true, 00:21:33.950 "num_base_bdevs": 2, 00:21:33.950 "num_base_bdevs_discovered": 2, 00:21:33.950 "num_base_bdevs_operational": 2, 00:21:33.950 "process": { 00:21:33.950 "type": "rebuild", 00:21:33.950 "target": "spare", 00:21:33.950 "progress": { 00:21:33.950 "blocks": 2816, 00:21:33.950 "percent": 35 00:21:33.950 } 00:21:33.950 }, 00:21:33.950 "base_bdevs_list": [ 00:21:33.950 { 00:21:33.951 "name": "spare", 00:21:33.951 "uuid": 
"461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:33.951 "is_configured": true, 00:21:33.951 "data_offset": 256, 00:21:33.951 "data_size": 7936 00:21:33.951 }, 00:21:33.951 { 00:21:33.951 "name": "BaseBdev2", 00:21:33.951 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:33.951 "is_configured": true, 00:21:33.951 "data_offset": 256, 00:21:33.951 "data_size": 7936 00:21:33.951 } 00:21:33.951 ] 00:21:33.951 }' 00:21:33.951 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.951 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.951 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.951 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.951 20:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.890 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.149 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.149 "name": "raid_bdev1", 00:21:35.149 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:35.149 "strip_size_kb": 0, 00:21:35.149 "state": "online", 00:21:35.149 "raid_level": "raid1", 00:21:35.149 "superblock": true, 00:21:35.149 "num_base_bdevs": 2, 00:21:35.149 "num_base_bdevs_discovered": 2, 00:21:35.149 "num_base_bdevs_operational": 2, 00:21:35.149 "process": { 00:21:35.149 "type": "rebuild", 00:21:35.149 "target": "spare", 00:21:35.149 "progress": { 00:21:35.149 "blocks": 5632, 00:21:35.149 "percent": 70 00:21:35.149 } 00:21:35.149 }, 00:21:35.149 "base_bdevs_list": [ 00:21:35.149 { 00:21:35.149 "name": "spare", 00:21:35.149 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:35.149 "is_configured": true, 00:21:35.149 "data_offset": 256, 00:21:35.149 "data_size": 7936 00:21:35.149 }, 00:21:35.149 { 00:21:35.149 "name": "BaseBdev2", 00:21:35.149 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:35.149 "is_configured": true, 00:21:35.149 "data_offset": 256, 00:21:35.149 "data_size": 7936 00:21:35.149 } 00:21:35.149 ] 00:21:35.149 }' 00:21:35.149 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.149 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.149 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.149 20:34:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.149 20:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:35.717 [2024-11-26 20:34:29.221858] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:35.717 [2024-11-26 20:34:29.221950] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:35.717 [2024-11-26 20:34:29.222080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.285 20:34:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.285 "name": "raid_bdev1", 00:21:36.285 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:36.285 "strip_size_kb": 0, 00:21:36.285 "state": "online", 00:21:36.285 "raid_level": "raid1", 00:21:36.285 "superblock": true, 00:21:36.285 "num_base_bdevs": 2, 00:21:36.285 "num_base_bdevs_discovered": 2, 00:21:36.285 "num_base_bdevs_operational": 2, 00:21:36.285 "base_bdevs_list": [ 00:21:36.285 { 00:21:36.285 "name": "spare", 00:21:36.285 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:36.285 "is_configured": true, 00:21:36.285 "data_offset": 256, 00:21:36.285 "data_size": 7936 00:21:36.285 }, 00:21:36.285 { 00:21:36.285 "name": "BaseBdev2", 00:21:36.285 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:36.285 "is_configured": true, 00:21:36.285 "data_offset": 256, 00:21:36.285 "data_size": 7936 00:21:36.285 } 00:21:36.285 ] 00:21:36.285 }' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:36.285 20:34:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.285 "name": "raid_bdev1", 00:21:36.285 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:36.285 "strip_size_kb": 0, 00:21:36.285 "state": "online", 00:21:36.285 "raid_level": "raid1", 00:21:36.285 "superblock": true, 00:21:36.285 "num_base_bdevs": 2, 00:21:36.285 "num_base_bdevs_discovered": 2, 00:21:36.285 "num_base_bdevs_operational": 2, 00:21:36.285 "base_bdevs_list": [ 00:21:36.285 { 00:21:36.285 "name": "spare", 00:21:36.285 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:36.285 "is_configured": true, 00:21:36.285 "data_offset": 256, 00:21:36.285 "data_size": 7936 00:21:36.285 }, 00:21:36.285 { 00:21:36.285 "name": "BaseBdev2", 00:21:36.285 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:36.285 "is_configured": true, 00:21:36.285 "data_offset": 256, 00:21:36.285 "data_size": 7936 00:21:36.285 } 00:21:36.285 ] 00:21:36.285 }' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:36.285 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.286 "name": "raid_bdev1", 00:21:36.286 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:36.286 "strip_size_kb": 0, 00:21:36.286 "state": "online", 00:21:36.286 "raid_level": "raid1", 00:21:36.286 "superblock": true, 00:21:36.286 "num_base_bdevs": 2, 00:21:36.286 "num_base_bdevs_discovered": 2, 00:21:36.286 "num_base_bdevs_operational": 2, 00:21:36.286 "base_bdevs_list": [ 00:21:36.286 { 00:21:36.286 "name": "spare", 00:21:36.286 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:36.286 "is_configured": true, 00:21:36.286 "data_offset": 256, 00:21:36.286 "data_size": 7936 00:21:36.286 }, 00:21:36.286 { 00:21:36.286 "name": "BaseBdev2", 00:21:36.286 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:36.286 "is_configured": true, 00:21:36.286 "data_offset": 256, 00:21:36.286 "data_size": 7936 00:21:36.286 } 00:21:36.286 ] 00:21:36.286 }' 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.286 20:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.853 [2024-11-26 20:34:30.254533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.853 [2024-11-26 20:34:30.254571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.853 [2024-11-26 20:34:30.254675] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.853 [2024-11-26 20:34:30.254755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.853 [2024-11-26 20:34:30.254779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.853 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:37.111 /dev/nbd0 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.111 1+0 records in 00:21:37.111 1+0 records out 00:21:37.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371654 s, 11.0 MB/s 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:37.111 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:37.370 /dev/nbd1 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.370 1+0 records in 00:21:37.370 1+0 records out 00:21:37.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455746 s, 9.0 MB/s 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:37.370 20:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:37.628 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:37.629 20:34:31 
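The `waitfornbd` calls above poll `/proc/partitions` for the new nbd device with a bounded retry (up to 20 attempts), then issue a `dd` read to confirm the device actually answers I/O. A sketch of that polling pattern, with a temp file standing in for `/proc/partitions` so it runs without a kernel nbd device:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern: poll for the device name in a partitions
# table, capped at 20 attempts. A temp file stands in for /proc/partitions.
partitions=$(mktemp)
trap 'rm -f "$partitions"' EXIT
printf '%s\n' 'major minor  #blocks  name' '  43  0  31744 nbd0' > "$partitions"

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1  # the real helper also dd-reads the device to confirm I/O
    done
    return 1
}

waitfornbd nbd0 && echo "nbd0 is up"
waitfornbd nbd9 || echo "nbd9 never appeared"
```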
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:37.629 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:37.629 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.629 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:37.629 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.629 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:37.886 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.887 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:38.146 20:34:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.146 [2024-11-26 20:34:31.605900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:38.146 [2024-11-26 20:34:31.605974] 
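The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above compares the two exported base bdevs while skipping the first 1 MiB of each, where per-device metadata (e.g. the superblock region) is allowed to differ. A standalone sketch with plain files standing in for the nbd devices:

```shell
#!/usr/bin/env bash
# Sketch of the `cmp -i 1048576` check: two images that differ only inside
# the first 1 MiB still compare equal once that region is skipped.
a=$(mktemp) b=$(mktemp)
trap 'rm -f "$a" "$b"' EXIT

dd if=/dev/urandom of="$a" bs=1M count=2 status=none
cp "$a" "$b"
printf 'superblock-A' | dd of="$a" bs=1 conv=notrunc status=none  # differ in first MiB
printf 'superblock-B' | dd of="$b" bs=1 conv=notrunc status=none

cmp "$a" "$b" >/dev/null && echo "identical from byte 0" || echo "differ in superblock region"
cmp -i 1048576 "$a" "$b" && echo "identical past 1 MiB offset"
```

`cmp -i N` (GNU `--ignore-initial`) skips the first N bytes of both operands before comparing.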
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.146 [2024-11-26 20:34:31.605999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:38.146 [2024-11-26 20:34:31.606010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.146 [2024-11-26 20:34:31.608282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.146 [2024-11-26 20:34:31.608317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:38.146 [2024-11-26 20:34:31.608397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:38.146 [2024-11-26 20:34:31.608461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:38.146 [2024-11-26 20:34:31.608628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.146 spare 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.146 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:38.147 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.147 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.406 [2024-11-26 20:34:31.708543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:38.406 [2024-11-26 20:34:31.708619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:38.406 [2024-11-26 20:34:31.708787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:38.406 [2024-11-26 20:34:31.708996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:38.406 [2024-11-26 20:34:31.709014] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:38.406 [2024-11-26 20:34:31.709198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.406 "name": "raid_bdev1", 00:21:38.406 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:38.406 "strip_size_kb": 0, 00:21:38.406 "state": "online", 00:21:38.406 "raid_level": "raid1", 00:21:38.406 "superblock": true, 00:21:38.406 "num_base_bdevs": 2, 00:21:38.406 "num_base_bdevs_discovered": 2, 00:21:38.406 "num_base_bdevs_operational": 2, 00:21:38.406 "base_bdevs_list": [ 00:21:38.406 { 00:21:38.406 "name": "spare", 00:21:38.406 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:38.406 "is_configured": true, 00:21:38.406 "data_offset": 256, 00:21:38.406 "data_size": 7936 00:21:38.406 }, 00:21:38.406 { 00:21:38.406 "name": "BaseBdev2", 00:21:38.406 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:38.406 "is_configured": true, 00:21:38.406 "data_offset": 256, 00:21:38.406 "data_size": 7936 00:21:38.406 } 00:21:38.406 ] 00:21:38.406 }' 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.406 20:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.664 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.926 "name": "raid_bdev1", 00:21:38.926 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:38.926 "strip_size_kb": 0, 00:21:38.926 "state": "online", 00:21:38.926 "raid_level": "raid1", 00:21:38.926 "superblock": true, 00:21:38.926 "num_base_bdevs": 2, 00:21:38.926 "num_base_bdevs_discovered": 2, 00:21:38.926 "num_base_bdevs_operational": 2, 00:21:38.926 "base_bdevs_list": [ 00:21:38.926 { 00:21:38.926 "name": "spare", 00:21:38.926 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:38.926 "is_configured": true, 00:21:38.926 "data_offset": 256, 00:21:38.926 "data_size": 7936 00:21:38.926 }, 00:21:38.926 { 00:21:38.926 "name": "BaseBdev2", 00:21:38.926 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:38.926 "is_configured": true, 00:21:38.926 "data_offset": 256, 00:21:38.926 "data_size": 7936 00:21:38.926 } 00:21:38.926 ] 00:21:38.926 }' 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.926 20:34:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.926 [2024-11-26 20:34:32.400595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.926 "name": "raid_bdev1", 00:21:38.926 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:38.926 "strip_size_kb": 0, 00:21:38.926 "state": "online", 00:21:38.926 "raid_level": "raid1", 00:21:38.926 "superblock": true, 00:21:38.926 "num_base_bdevs": 2, 00:21:38.926 "num_base_bdevs_discovered": 1, 00:21:38.926 "num_base_bdevs_operational": 1, 00:21:38.926 "base_bdevs_list": [ 00:21:38.926 { 00:21:38.926 "name": null, 00:21:38.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.926 "is_configured": false, 00:21:38.926 "data_offset": 0, 00:21:38.926 "data_size": 7936 00:21:38.926 }, 00:21:38.926 { 00:21:38.926 
"name": "BaseBdev2", 00:21:38.926 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:38.926 "is_configured": true, 00:21:38.926 "data_offset": 256, 00:21:38.926 "data_size": 7936 00:21:38.926 } 00:21:38.926 ] 00:21:38.926 }' 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.926 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.495 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:39.495 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.495 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.495 [2024-11-26 20:34:32.855856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:39.495 [2024-11-26 20:34:32.856090] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:39.495 [2024-11-26 20:34:32.856109] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:39.495 [2024-11-26 20:34:32.856144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:39.495 [2024-11-26 20:34:32.873062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:39.495 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.495 20:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:39.495 [2024-11-26 20:34:32.875185] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:40.431 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:40.432 "name": "raid_bdev1", 00:21:40.432 
"uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:40.432 "strip_size_kb": 0, 00:21:40.432 "state": "online", 00:21:40.432 "raid_level": "raid1", 00:21:40.432 "superblock": true, 00:21:40.432 "num_base_bdevs": 2, 00:21:40.432 "num_base_bdevs_discovered": 2, 00:21:40.432 "num_base_bdevs_operational": 2, 00:21:40.432 "process": { 00:21:40.432 "type": "rebuild", 00:21:40.432 "target": "spare", 00:21:40.432 "progress": { 00:21:40.432 "blocks": 2560, 00:21:40.432 "percent": 32 00:21:40.432 } 00:21:40.432 }, 00:21:40.432 "base_bdevs_list": [ 00:21:40.432 { 00:21:40.432 "name": "spare", 00:21:40.432 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:40.432 "is_configured": true, 00:21:40.432 "data_offset": 256, 00:21:40.432 "data_size": 7936 00:21:40.432 }, 00:21:40.432 { 00:21:40.432 "name": "BaseBdev2", 00:21:40.432 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:40.432 "is_configured": true, 00:21:40.432 "data_offset": 256, 00:21:40.432 "data_size": 7936 00:21:40.432 } 00:21:40.432 ] 00:21:40.432 }' 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:40.432 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:40.690 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:40.691 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:40.691 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.691 20:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.691 [2024-11-26 20:34:34.006909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:40.691 
[2024-11-26 20:34:34.081075] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:40.691 [2024-11-26 20:34:34.081297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.691 [2024-11-26 20:34:34.081320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:40.691 [2024-11-26 20:34:34.081349] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.691 20:34:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.691 "name": "raid_bdev1", 00:21:40.691 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:40.691 "strip_size_kb": 0, 00:21:40.691 "state": "online", 00:21:40.691 "raid_level": "raid1", 00:21:40.691 "superblock": true, 00:21:40.691 "num_base_bdevs": 2, 00:21:40.691 "num_base_bdevs_discovered": 1, 00:21:40.691 "num_base_bdevs_operational": 1, 00:21:40.691 "base_bdevs_list": [ 00:21:40.691 { 00:21:40.691 "name": null, 00:21:40.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.691 "is_configured": false, 00:21:40.691 "data_offset": 0, 00:21:40.691 "data_size": 7936 00:21:40.691 }, 00:21:40.691 { 00:21:40.691 "name": "BaseBdev2", 00:21:40.691 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:40.691 "is_configured": true, 00:21:40.691 "data_offset": 256, 00:21:40.691 "data_size": 7936 00:21:40.691 } 00:21:40.691 ] 00:21:40.691 }' 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.691 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.257 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:41.257 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.257 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:41.257 [2024-11-26 20:34:34.526657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:41.257 [2024-11-26 20:34:34.526822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.257 [2024-11-26 20:34:34.526882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:41.257 [2024-11-26 20:34:34.526921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.257 [2024-11-26 20:34:34.527266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.257 [2024-11-26 20:34:34.527330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:41.257 [2024-11-26 20:34:34.527427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:41.257 [2024-11-26 20:34:34.527473] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:41.257 [2024-11-26 20:34:34.527522] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:41.257 [2024-11-26 20:34:34.527570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:41.257 [2024-11-26 20:34:34.544769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:41.257 spare 00:21:41.257 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.257 20:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:41.257 [2024-11-26 20:34:34.546929] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.191 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.191 "name": 
"raid_bdev1", 00:21:42.191 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:42.191 "strip_size_kb": 0, 00:21:42.191 "state": "online", 00:21:42.191 "raid_level": "raid1", 00:21:42.191 "superblock": true, 00:21:42.191 "num_base_bdevs": 2, 00:21:42.191 "num_base_bdevs_discovered": 2, 00:21:42.191 "num_base_bdevs_operational": 2, 00:21:42.191 "process": { 00:21:42.191 "type": "rebuild", 00:21:42.191 "target": "spare", 00:21:42.191 "progress": { 00:21:42.191 "blocks": 2560, 00:21:42.191 "percent": 32 00:21:42.191 } 00:21:42.191 }, 00:21:42.191 "base_bdevs_list": [ 00:21:42.191 { 00:21:42.191 "name": "spare", 00:21:42.191 "uuid": "461de93a-923f-5bf2-9c03-82a0c6dc5a7b", 00:21:42.191 "is_configured": true, 00:21:42.191 "data_offset": 256, 00:21:42.191 "data_size": 7936 00:21:42.191 }, 00:21:42.191 { 00:21:42.191 "name": "BaseBdev2", 00:21:42.192 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:42.192 "is_configured": true, 00:21:42.192 "data_offset": 256, 00:21:42.192 "data_size": 7936 00:21:42.192 } 00:21:42.192 ] 00:21:42.192 }' 00:21:42.192 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.192 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:42.192 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.192 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:42.192 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:42.192 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.192 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.192 [2024-11-26 20:34:35.706699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:42.450 [2024-11-26 20:34:35.752898] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:42.450 [2024-11-26 20:34:35.752969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.451 [2024-11-26 20:34:35.752991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:42.451 [2024-11-26 20:34:35.752999] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.451 "name": "raid_bdev1", 00:21:42.451 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:42.451 "strip_size_kb": 0, 00:21:42.451 "state": "online", 00:21:42.451 "raid_level": "raid1", 00:21:42.451 "superblock": true, 00:21:42.451 "num_base_bdevs": 2, 00:21:42.451 "num_base_bdevs_discovered": 1, 00:21:42.451 "num_base_bdevs_operational": 1, 00:21:42.451 "base_bdevs_list": [ 00:21:42.451 { 00:21:42.451 "name": null, 00:21:42.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.451 "is_configured": false, 00:21:42.451 "data_offset": 0, 00:21:42.451 "data_size": 7936 00:21:42.451 }, 00:21:42.451 { 00:21:42.451 "name": "BaseBdev2", 00:21:42.451 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:42.451 "is_configured": true, 00:21:42.451 "data_offset": 256, 00:21:42.451 "data_size": 7936 00:21:42.451 } 00:21:42.451 ] 00:21:42.451 }' 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.451 20:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.709 20:34:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.709 "name": "raid_bdev1", 00:21:42.709 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:42.709 "strip_size_kb": 0, 00:21:42.709 "state": "online", 00:21:42.709 "raid_level": "raid1", 00:21:42.709 "superblock": true, 00:21:42.709 "num_base_bdevs": 2, 00:21:42.709 "num_base_bdevs_discovered": 1, 00:21:42.709 "num_base_bdevs_operational": 1, 00:21:42.709 "base_bdevs_list": [ 00:21:42.709 { 00:21:42.709 "name": null, 00:21:42.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.709 "is_configured": false, 00:21:42.709 "data_offset": 0, 00:21:42.709 "data_size": 7936 00:21:42.709 }, 00:21:42.709 { 00:21:42.709 "name": "BaseBdev2", 00:21:42.709 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:42.709 "is_configured": true, 00:21:42.709 "data_offset": 256, 00:21:42.709 "data_size": 7936 00:21:42.709 } 00:21:42.709 ] 00:21:42.709 }' 00:21:42.709 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:42.967 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.968 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.968 [2024-11-26 20:34:36.372195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:42.968 [2024-11-26 20:34:36.372334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.968 [2024-11-26 20:34:36.372367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:42.968 [2024-11-26 20:34:36.372379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.968 [2024-11-26 20:34:36.372643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.968 [2024-11-26 20:34:36.372657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:21:42.968 [2024-11-26 20:34:36.372717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:42.968 [2024-11-26 20:34:36.372731] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:42.968 [2024-11-26 20:34:36.372742] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:42.968 [2024-11-26 20:34:36.372753] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:42.968 BaseBdev1 00:21:42.968 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.968 20:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.936 "name": "raid_bdev1", 00:21:43.936 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:43.936 "strip_size_kb": 0, 00:21:43.936 "state": "online", 00:21:43.936 "raid_level": "raid1", 00:21:43.936 "superblock": true, 00:21:43.936 "num_base_bdevs": 2, 00:21:43.936 "num_base_bdevs_discovered": 1, 00:21:43.936 "num_base_bdevs_operational": 1, 00:21:43.936 "base_bdevs_list": [ 00:21:43.936 { 00:21:43.936 "name": null, 00:21:43.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.936 "is_configured": false, 00:21:43.936 "data_offset": 0, 00:21:43.936 "data_size": 7936 00:21:43.936 }, 00:21:43.936 { 00:21:43.936 "name": "BaseBdev2", 00:21:43.936 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:43.936 "is_configured": true, 00:21:43.936 "data_offset": 256, 00:21:43.936 "data_size": 7936 00:21:43.936 } 00:21:43.936 ] 00:21:43.936 }' 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.936 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.505 "name": "raid_bdev1", 00:21:44.505 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:44.505 "strip_size_kb": 0, 00:21:44.505 "state": "online", 00:21:44.505 "raid_level": "raid1", 00:21:44.505 "superblock": true, 00:21:44.505 "num_base_bdevs": 2, 00:21:44.505 "num_base_bdevs_discovered": 1, 00:21:44.505 "num_base_bdevs_operational": 1, 00:21:44.505 "base_bdevs_list": [ 00:21:44.505 { 00:21:44.505 "name": null, 00:21:44.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.505 "is_configured": false, 00:21:44.505 "data_offset": 0, 00:21:44.505 "data_size": 7936 00:21:44.505 }, 00:21:44.505 { 00:21:44.505 "name": "BaseBdev2", 00:21:44.505 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:44.505 "is_configured": 
true, 00:21:44.505 "data_offset": 256, 00:21:44.505 "data_size": 7936 00:21:44.505 } 00:21:44.505 ] 00:21:44.505 }' 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.505 [2024-11-26 20:34:37.937710] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:44.505 [2024-11-26 20:34:37.937954] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:44.505 [2024-11-26 20:34:37.938026] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:44.505 request: 00:21:44.505 { 00:21:44.505 "base_bdev": "BaseBdev1", 00:21:44.505 "raid_bdev": "raid_bdev1", 00:21:44.505 "method": "bdev_raid_add_base_bdev", 00:21:44.505 "req_id": 1 00:21:44.505 } 00:21:44.505 Got JSON-RPC error response 00:21:44.505 response: 00:21:44.505 { 00:21:44.505 "code": -22, 00:21:44.505 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:44.505 } 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.505 20:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.439 20:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.697 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.697 "name": "raid_bdev1", 00:21:45.697 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:45.697 "strip_size_kb": 0, 00:21:45.697 "state": "online", 00:21:45.697 "raid_level": "raid1", 00:21:45.697 "superblock": true, 00:21:45.697 "num_base_bdevs": 2, 00:21:45.697 "num_base_bdevs_discovered": 1, 00:21:45.697 "num_base_bdevs_operational": 1, 00:21:45.697 "base_bdevs_list": [ 00:21:45.697 { 00:21:45.697 "name": null, 00:21:45.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.697 "is_configured": false, 00:21:45.697 
"data_offset": 0, 00:21:45.697 "data_size": 7936 00:21:45.697 }, 00:21:45.697 { 00:21:45.697 "name": "BaseBdev2", 00:21:45.697 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:45.697 "is_configured": true, 00:21:45.697 "data_offset": 256, 00:21:45.697 "data_size": 7936 00:21:45.697 } 00:21:45.697 ] 00:21:45.697 }' 00:21:45.697 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.697 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.956 "name": "raid_bdev1", 00:21:45.956 "uuid": "8733d3f5-75c5-49cb-9cde-add73ac37766", 00:21:45.956 
"strip_size_kb": 0, 00:21:45.956 "state": "online", 00:21:45.956 "raid_level": "raid1", 00:21:45.956 "superblock": true, 00:21:45.956 "num_base_bdevs": 2, 00:21:45.956 "num_base_bdevs_discovered": 1, 00:21:45.956 "num_base_bdevs_operational": 1, 00:21:45.956 "base_bdevs_list": [ 00:21:45.956 { 00:21:45.956 "name": null, 00:21:45.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.956 "is_configured": false, 00:21:45.956 "data_offset": 0, 00:21:45.956 "data_size": 7936 00:21:45.956 }, 00:21:45.956 { 00:21:45.956 "name": "BaseBdev2", 00:21:45.956 "uuid": "4fa43e13-2a25-5ab9-878c-727cc66af3d9", 00:21:45.956 "is_configured": true, 00:21:45.956 "data_offset": 256, 00:21:45.956 "data_size": 7936 00:21:45.956 } 00:21:45.956 ] 00:21:45.956 }' 00:21:45.956 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88252 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88252 ']' 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88252 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.237 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88252 00:21:46.238 killing process with 
pid 88252 00:21:46.238 Received shutdown signal, test time was about 60.000000 seconds 00:21:46.238 00:21:46.238 Latency(us) 00:21:46.238 [2024-11-26T20:34:39.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.238 [2024-11-26T20:34:39.793Z] =================================================================================================================== 00:21:46.238 [2024-11-26T20:34:39.793Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.238 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.238 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.238 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88252' 00:21:46.238 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88252 00:21:46.238 [2024-11-26 20:34:39.592309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.238 20:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88252 00:21:46.238 [2024-11-26 20:34:39.592455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.238 [2024-11-26 20:34:39.592512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.238 [2024-11-26 20:34:39.592525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:46.496 [2024-11-26 20:34:39.975037] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.877 20:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:21:47.878 ************************************ 00:21:47.878 END TEST raid_rebuild_test_sb_md_separate 00:21:47.878 
************************************ 00:21:47.878 00:21:47.878 real 0m20.475s 00:21:47.878 user 0m26.692s 00:21:47.878 sys 0m2.600s 00:21:47.878 20:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.878 20:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:47.878 20:34:41 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:47.878 20:34:41 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:47.878 20:34:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:47.878 20:34:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.878 20:34:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.878 ************************************ 00:21:47.878 START TEST raid_state_function_test_sb_md_interleaved 00:21:47.878 ************************************ 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:47.878 20:34:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:21:47.878 Process raid pid: 88948 00:21:47.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88948 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88948' 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88948 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88948 ']' 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.878 20:34:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:48.136 [2024-11-26 20:34:41.481019] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:21:48.136 [2024-11-26 20:34:41.481314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.136 [2024-11-26 20:34:41.657644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.395 [2024-11-26 20:34:41.791592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.653 [2024-11-26 20:34:42.015121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.653 [2024-11-26 20:34:42.015223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:48.915 [2024-11-26 20:34:42.385450] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:48.915 [2024-11-26 20:34:42.385570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:48.915 [2024-11-26 20:34:42.385612] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:48.915 [2024-11-26 20:34:42.385649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:48.915 20:34:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:48.915 20:34:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.915 "name": "Existed_Raid", 00:21:48.915 "uuid": "3551e0dc-6649-4e4b-a51e-78e23211a647", 00:21:48.915 "strip_size_kb": 0, 00:21:48.915 "state": "configuring", 00:21:48.915 "raid_level": "raid1", 00:21:48.915 "superblock": true, 00:21:48.915 "num_base_bdevs": 2, 00:21:48.915 "num_base_bdevs_discovered": 0, 00:21:48.915 "num_base_bdevs_operational": 2, 00:21:48.915 "base_bdevs_list": [ 00:21:48.915 { 00:21:48.915 "name": "BaseBdev1", 00:21:48.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.915 "is_configured": false, 00:21:48.915 "data_offset": 0, 00:21:48.915 "data_size": 0 00:21:48.915 }, 00:21:48.915 { 00:21:48.915 "name": "BaseBdev2", 00:21:48.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.915 "is_configured": false, 00:21:48.915 "data_offset": 0, 00:21:48.915 "data_size": 0 00:21:48.915 } 00:21:48.915 ] 00:21:48.915 }' 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.915 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.488 [2024-11-26 20:34:42.848675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:49.488 [2024-11-26 20:34:42.848767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.488 [2024-11-26 20:34:42.856653] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:49.488 [2024-11-26 20:34:42.856739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:49.488 [2024-11-26 20:34:42.856775] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:49.488 [2024-11-26 20:34:42.856812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.488 [2024-11-26 20:34:42.907176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.488 BaseBdev1 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.488 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.488 [ 00:21:49.488 { 00:21:49.488 "name": "BaseBdev1", 00:21:49.488 "aliases": [ 00:21:49.488 "0aa1a901-5e06-451a-a35d-c34a55bb876f" 00:21:49.488 ], 00:21:49.488 "product_name": "Malloc disk", 00:21:49.488 "block_size": 4128, 00:21:49.488 "num_blocks": 8192, 00:21:49.488 "uuid": "0aa1a901-5e06-451a-a35d-c34a55bb876f", 00:21:49.488 "md_size": 32, 00:21:49.488 
"md_interleave": true, 00:21:49.488 "dif_type": 0, 00:21:49.488 "assigned_rate_limits": { 00:21:49.488 "rw_ios_per_sec": 0, 00:21:49.488 "rw_mbytes_per_sec": 0, 00:21:49.488 "r_mbytes_per_sec": 0, 00:21:49.488 "w_mbytes_per_sec": 0 00:21:49.488 }, 00:21:49.488 "claimed": true, 00:21:49.488 "claim_type": "exclusive_write", 00:21:49.488 "zoned": false, 00:21:49.488 "supported_io_types": { 00:21:49.488 "read": true, 00:21:49.488 "write": true, 00:21:49.488 "unmap": true, 00:21:49.488 "flush": true, 00:21:49.488 "reset": true, 00:21:49.488 "nvme_admin": false, 00:21:49.488 "nvme_io": false, 00:21:49.488 "nvme_io_md": false, 00:21:49.488 "write_zeroes": true, 00:21:49.488 "zcopy": true, 00:21:49.488 "get_zone_info": false, 00:21:49.488 "zone_management": false, 00:21:49.488 "zone_append": false, 00:21:49.488 "compare": false, 00:21:49.488 "compare_and_write": false, 00:21:49.488 "abort": true, 00:21:49.488 "seek_hole": false, 00:21:49.488 "seek_data": false, 00:21:49.488 "copy": true, 00:21:49.488 "nvme_iov_md": false 00:21:49.488 }, 00:21:49.488 "memory_domains": [ 00:21:49.488 { 00:21:49.488 "dma_device_id": "system", 00:21:49.488 "dma_device_type": 1 00:21:49.488 }, 00:21:49.488 { 00:21:49.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.489 "dma_device_type": 2 00:21:49.489 } 00:21:49.489 ], 00:21:49.489 "driver_specific": {} 00:21:49.489 } 00:21:49.489 ] 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.489 20:34:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.489 "name": "Existed_Raid", 00:21:49.489 "uuid": "fd3f7c0e-724d-48eb-87e0-444df0913e74", 00:21:49.489 "strip_size_kb": 0, 00:21:49.489 "state": "configuring", 00:21:49.489 "raid_level": "raid1", 
00:21:49.489 "superblock": true, 00:21:49.489 "num_base_bdevs": 2, 00:21:49.489 "num_base_bdevs_discovered": 1, 00:21:49.489 "num_base_bdevs_operational": 2, 00:21:49.489 "base_bdevs_list": [ 00:21:49.489 { 00:21:49.489 "name": "BaseBdev1", 00:21:49.489 "uuid": "0aa1a901-5e06-451a-a35d-c34a55bb876f", 00:21:49.489 "is_configured": true, 00:21:49.489 "data_offset": 256, 00:21:49.489 "data_size": 7936 00:21:49.489 }, 00:21:49.489 { 00:21:49.489 "name": "BaseBdev2", 00:21:49.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.489 "is_configured": false, 00:21:49.489 "data_offset": 0, 00:21:49.489 "data_size": 0 00:21:49.489 } 00:21:49.489 ] 00:21:49.489 }' 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.489 20:34:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.055 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:50.055 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.055 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.055 [2024-11-26 20:34:43.390444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:50.055 [2024-11-26 20:34:43.390554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:50.055 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.055 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.056 [2024-11-26 20:34:43.402478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.056 [2024-11-26 20:34:43.404674] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:50.056 [2024-11-26 20:34:43.404782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.056 
20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.056 "name": "Existed_Raid", 00:21:50.056 "uuid": "aee020d3-33ea-4856-9de9-8c4591e25b6b", 00:21:50.056 "strip_size_kb": 0, 00:21:50.056 "state": "configuring", 00:21:50.056 "raid_level": "raid1", 00:21:50.056 "superblock": true, 00:21:50.056 "num_base_bdevs": 2, 00:21:50.056 "num_base_bdevs_discovered": 1, 00:21:50.056 "num_base_bdevs_operational": 2, 00:21:50.056 "base_bdevs_list": [ 00:21:50.056 { 00:21:50.056 "name": "BaseBdev1", 00:21:50.056 "uuid": "0aa1a901-5e06-451a-a35d-c34a55bb876f", 00:21:50.056 "is_configured": true, 00:21:50.056 "data_offset": 256, 00:21:50.056 "data_size": 7936 00:21:50.056 }, 00:21:50.056 { 00:21:50.056 "name": "BaseBdev2", 00:21:50.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.056 "is_configured": false, 00:21:50.056 "data_offset": 0, 00:21:50.056 "data_size": 0 00:21:50.056 } 00:21:50.056 ] 00:21:50.056 }' 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:50.056 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.314 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:50.314 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.314 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.573 [2024-11-26 20:34:43.902119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.573 [2024-11-26 20:34:43.902390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:50.573 [2024-11-26 20:34:43.902408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:50.573 [2024-11-26 20:34:43.902502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:50.573 [2024-11-26 20:34:43.902582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:50.573 [2024-11-26 20:34:43.902595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:50.573 [2024-11-26 20:34:43.902667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.573 BaseBdev2 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.573 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.573 [ 00:21:50.573 { 00:21:50.573 "name": "BaseBdev2", 00:21:50.573 "aliases": [ 00:21:50.573 "723c341b-1032-47c9-921d-105f2f5664f5" 00:21:50.573 ], 00:21:50.574 "product_name": "Malloc disk", 00:21:50.574 "block_size": 4128, 00:21:50.574 "num_blocks": 8192, 00:21:50.574 "uuid": "723c341b-1032-47c9-921d-105f2f5664f5", 00:21:50.574 "md_size": 32, 00:21:50.574 "md_interleave": true, 00:21:50.574 "dif_type": 0, 00:21:50.574 "assigned_rate_limits": { 00:21:50.574 "rw_ios_per_sec": 0, 00:21:50.574 "rw_mbytes_per_sec": 0, 00:21:50.574 "r_mbytes_per_sec": 0, 00:21:50.574 "w_mbytes_per_sec": 0 00:21:50.574 }, 00:21:50.574 "claimed": true, 00:21:50.574 "claim_type": "exclusive_write", 
00:21:50.574 "zoned": false, 00:21:50.574 "supported_io_types": { 00:21:50.574 "read": true, 00:21:50.574 "write": true, 00:21:50.574 "unmap": true, 00:21:50.574 "flush": true, 00:21:50.574 "reset": true, 00:21:50.574 "nvme_admin": false, 00:21:50.574 "nvme_io": false, 00:21:50.574 "nvme_io_md": false, 00:21:50.574 "write_zeroes": true, 00:21:50.574 "zcopy": true, 00:21:50.574 "get_zone_info": false, 00:21:50.574 "zone_management": false, 00:21:50.574 "zone_append": false, 00:21:50.574 "compare": false, 00:21:50.574 "compare_and_write": false, 00:21:50.574 "abort": true, 00:21:50.574 "seek_hole": false, 00:21:50.574 "seek_data": false, 00:21:50.574 "copy": true, 00:21:50.574 "nvme_iov_md": false 00:21:50.574 }, 00:21:50.574 "memory_domains": [ 00:21:50.574 { 00:21:50.574 "dma_device_id": "system", 00:21:50.574 "dma_device_type": 1 00:21:50.574 }, 00:21:50.574 { 00:21:50.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.574 "dma_device_type": 2 00:21:50.574 } 00:21:50.574 ], 00:21:50.574 "driver_specific": {} 00:21:50.574 } 00:21:50.574 ] 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.574 
20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.574 "name": "Existed_Raid", 00:21:50.574 "uuid": "aee020d3-33ea-4856-9de9-8c4591e25b6b", 00:21:50.574 "strip_size_kb": 0, 00:21:50.574 "state": "online", 00:21:50.574 "raid_level": "raid1", 00:21:50.574 "superblock": true, 00:21:50.574 "num_base_bdevs": 2, 00:21:50.574 "num_base_bdevs_discovered": 2, 00:21:50.574 
"num_base_bdevs_operational": 2, 00:21:50.574 "base_bdevs_list": [ 00:21:50.574 { 00:21:50.574 "name": "BaseBdev1", 00:21:50.574 "uuid": "0aa1a901-5e06-451a-a35d-c34a55bb876f", 00:21:50.574 "is_configured": true, 00:21:50.574 "data_offset": 256, 00:21:50.574 "data_size": 7936 00:21:50.574 }, 00:21:50.574 { 00:21:50.574 "name": "BaseBdev2", 00:21:50.574 "uuid": "723c341b-1032-47c9-921d-105f2f5664f5", 00:21:50.574 "is_configured": true, 00:21:50.574 "data_offset": 256, 00:21:50.574 "data_size": 7936 00:21:50.574 } 00:21:50.574 ] 00:21:50.574 }' 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.574 20:34:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.833 20:34:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:50.833 [2024-11-26 20:34:44.365790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.833 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:51.092 "name": "Existed_Raid", 00:21:51.092 "aliases": [ 00:21:51.092 "aee020d3-33ea-4856-9de9-8c4591e25b6b" 00:21:51.092 ], 00:21:51.092 "product_name": "Raid Volume", 00:21:51.092 "block_size": 4128, 00:21:51.092 "num_blocks": 7936, 00:21:51.092 "uuid": "aee020d3-33ea-4856-9de9-8c4591e25b6b", 00:21:51.092 "md_size": 32, 00:21:51.092 "md_interleave": true, 00:21:51.092 "dif_type": 0, 00:21:51.092 "assigned_rate_limits": { 00:21:51.092 "rw_ios_per_sec": 0, 00:21:51.092 "rw_mbytes_per_sec": 0, 00:21:51.092 "r_mbytes_per_sec": 0, 00:21:51.092 "w_mbytes_per_sec": 0 00:21:51.092 }, 00:21:51.092 "claimed": false, 00:21:51.092 "zoned": false, 00:21:51.092 "supported_io_types": { 00:21:51.092 "read": true, 00:21:51.092 "write": true, 00:21:51.092 "unmap": false, 00:21:51.092 "flush": false, 00:21:51.092 "reset": true, 00:21:51.092 "nvme_admin": false, 00:21:51.092 "nvme_io": false, 00:21:51.092 "nvme_io_md": false, 00:21:51.092 "write_zeroes": true, 00:21:51.092 "zcopy": false, 00:21:51.092 "get_zone_info": false, 00:21:51.092 "zone_management": false, 00:21:51.092 "zone_append": false, 00:21:51.092 "compare": false, 00:21:51.092 "compare_and_write": false, 00:21:51.092 "abort": false, 00:21:51.092 "seek_hole": false, 00:21:51.092 "seek_data": false, 00:21:51.092 "copy": false, 00:21:51.092 "nvme_iov_md": false 00:21:51.092 }, 00:21:51.092 "memory_domains": [ 00:21:51.092 { 00:21:51.092 "dma_device_id": "system", 00:21:51.092 "dma_device_type": 1 00:21:51.092 }, 00:21:51.092 { 00:21:51.092 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:51.092 "dma_device_type": 2 00:21:51.092 }, 00:21:51.092 { 00:21:51.092 "dma_device_id": "system", 00:21:51.092 "dma_device_type": 1 00:21:51.092 }, 00:21:51.092 { 00:21:51.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.092 "dma_device_type": 2 00:21:51.092 } 00:21:51.092 ], 00:21:51.092 "driver_specific": { 00:21:51.092 "raid": { 00:21:51.092 "uuid": "aee020d3-33ea-4856-9de9-8c4591e25b6b", 00:21:51.092 "strip_size_kb": 0, 00:21:51.092 "state": "online", 00:21:51.092 "raid_level": "raid1", 00:21:51.092 "superblock": true, 00:21:51.092 "num_base_bdevs": 2, 00:21:51.092 "num_base_bdevs_discovered": 2, 00:21:51.092 "num_base_bdevs_operational": 2, 00:21:51.092 "base_bdevs_list": [ 00:21:51.092 { 00:21:51.092 "name": "BaseBdev1", 00:21:51.092 "uuid": "0aa1a901-5e06-451a-a35d-c34a55bb876f", 00:21:51.092 "is_configured": true, 00:21:51.092 "data_offset": 256, 00:21:51.092 "data_size": 7936 00:21:51.092 }, 00:21:51.092 { 00:21:51.092 "name": "BaseBdev2", 00:21:51.092 "uuid": "723c341b-1032-47c9-921d-105f2f5664f5", 00:21:51.092 "is_configured": true, 00:21:51.092 "data_offset": 256, 00:21:51.092 "data_size": 7936 00:21:51.092 } 00:21:51.092 ] 00:21:51.092 } 00:21:51.092 } 00:21:51.092 }' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:51.092 BaseBdev2' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:51.092 
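The `@189`/`@192` checks above reduce the raid volume and each base bdev to the same four-field string (`block_size md_size md_interleave dif_type`) before comparing them. A runnable sketch of that jq filter against a trimmed sample of the log's own `bdev_get_bdevs` output (no SPDK target needed; `jq` assumed available):

```shell
# Rebuild the "4128 32 true 0" comparison string from a sample bdev record.
# Field values are copied from the log above; jq's join() stringifies the
# numbers and the boolean.
sample='[{"block_size": 4128, "md_size": 32, "md_interleave": true, "dif_type": 0}]'
cmp=$(echo "$sample" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
echo "$cmp"   # 4128 32 true 0
```

If any of the four fields differ between the raid bdev and a base bdev, the `[[ ... == ... ]]` comparison at `@193` fails the test.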
20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.092 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.092 [2024-11-26 20:34:44.549229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.351 20:34:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.351 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.352 "name": "Existed_Raid", 00:21:51.352 "uuid": "aee020d3-33ea-4856-9de9-8c4591e25b6b", 00:21:51.352 "strip_size_kb": 0, 00:21:51.352 "state": "online", 00:21:51.352 "raid_level": "raid1", 00:21:51.352 "superblock": true, 00:21:51.352 "num_base_bdevs": 2, 00:21:51.352 "num_base_bdevs_discovered": 1, 00:21:51.352 "num_base_bdevs_operational": 1, 00:21:51.352 "base_bdevs_list": [ 00:21:51.352 { 00:21:51.352 "name": null, 00:21:51.352 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:51.352 "is_configured": false, 00:21:51.352 "data_offset": 0, 00:21:51.352 "data_size": 7936 00:21:51.352 }, 00:21:51.352 { 00:21:51.352 "name": "BaseBdev2", 00:21:51.352 "uuid": "723c341b-1032-47c9-921d-105f2f5664f5", 00:21:51.352 "is_configured": true, 00:21:51.352 "data_offset": 256, 00:21:51.352 "data_size": 7936 00:21:51.352 } 00:21:51.352 ] 00:21:51.352 }' 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.352 20:34:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:51.609 20:34:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.609 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.868 [2024-11-26 20:34:45.164832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:51.868 [2024-11-26 20:34:45.164966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.868 [2024-11-26 20:34:45.282655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.868 [2024-11-26 20:34:45.282710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.868 [2024-11-26 20:34:45.282724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88948 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88948 ']' 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88948 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88948 00:21:51.868 killing process with pid 88948 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88948' 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88948 00:21:51.868 [2024-11-26 20:34:45.382986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:51.868 20:34:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88948 00:21:51.868 [2024-11-26 20:34:45.403155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:53.242 
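The property checks in the test that just finished (`@187`-`@193`) all work by pulling fields out of the dumped JSON. A standalone sketch of the `@188` extraction of configured base bdev names, run against a trimmed copy of the raid volume JSON shown earlier in the log (`jq` assumed available; no SPDK target needed):

```shell
# Extract the names of configured base bdevs, as bdev_raid.sh@188 does above.
# The JSON is a trimmed sample of the log's "driver_specific" block.
info='{"driver_specific": {"raid": {"base_bdevs_list": [
  {"name": "BaseBdev1", "is_configured": true},
  {"name": "BaseBdev2", "is_configured": true}]}}}'
names=$(echo "$info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"   # BaseBdev1 and BaseBdev2, one per line
```

The script then loops `for name in $base_bdev_names` and fetches each base bdev individually, which is the `@191`/`@192` pattern visible above.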
************************************ 00:21:53.242 END TEST raid_state_function_test_sb_md_interleaved 00:21:53.242 ************************************ 00:21:53.242 20:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:53.242 00:21:53.242 real 0m5.389s 00:21:53.242 user 0m7.702s 00:21:53.242 sys 0m0.763s 00:21:53.242 20:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.242 20:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.500 20:34:46 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:53.500 20:34:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:53.500 20:34:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.500 20:34:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:53.500 ************************************ 00:21:53.500 START TEST raid_superblock_test_md_interleaved 00:21:53.500 ************************************ 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89200 00:21:53.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89200 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89200 ']' 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.500 20:34:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.501 [2024-11-26 20:34:46.914560] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
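The superblock test that starts here builds a malloc -> passthru -> raid1 stack via RPC (`@425`, `@426`, `@430` below). The sequence can be sketched with a mocked `rpc_cmd`, so it runs without an SPDK target; the flags mirror the log (`-m 32 -i` requests 32-byte interleaved metadata, `-s` a superblock):

```shell
# Mocked RPC driver: echoes each call instead of talking to /var/tmp/spdk.sock,
# purely to show the order of calls this test issues.
rpc_cmd() { echo "rpc: $*"; }

for i in 1 2; do
    # 32 MiB malloc bdev, 4096B blocks, 32B interleaved metadata (@425)
    rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b "malloc$i"
    # passthru wrapper with a fixed UUID so superblock contents are stable (@426)
    rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done
# raid1 over both passthru bdevs, superblock enabled (@430)
rpc_cmd bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
```

Against a live target, `rpc_cmd` resolves to SPDK's `scripts/rpc.py` wrapper, as set up in `autotest_common.sh`.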
00:21:53.501 [2024-11-26 20:34:46.914698] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89200 ] 00:21:53.760 [2024-11-26 20:34:47.080634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.760 [2024-11-26 20:34:47.208892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.018 [2024-11-26 20:34:47.448895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.019 [2024-11-26 20:34:47.448969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.278 malloc1 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.278 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.538 [2024-11-26 20:34:47.834098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:54.538 [2024-11-26 20:34:47.834181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.538 [2024-11-26 20:34:47.834217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:54.538 [2024-11-26 20:34:47.834233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.538 [2024-11-26 20:34:47.836983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.538 [2024-11-26 20:34:47.837038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:54.538 pt1 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.538 20:34:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.538 malloc2 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.538 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.538 [2024-11-26 20:34:47.888399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:54.538 [2024-11-26 20:34:47.888468] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.539 [2024-11-26 20:34:47.888494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:54.539 [2024-11-26 20:34:47.888505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.539 [2024-11-26 20:34:47.890722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.539 [2024-11-26 20:34:47.890763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:54.539 pt2 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.539 [2024-11-26 20:34:47.896449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:54.539 [2024-11-26 20:34:47.898582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:54.539 [2024-11-26 20:34:47.898819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:54.539 [2024-11-26 20:34:47.898836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:54.539 [2024-11-26 20:34:47.898927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:54.539 [2024-11-26 20:34:47.899010] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:54.539 [2024-11-26 20:34:47.899025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:54.539 [2024-11-26 20:34:47.899110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.539 
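`verify_raid_bdev_state` (the `@113` lines around here) selects one raid bdev out of the `bdev_raid_get_bdevs all` array by name before checking its fields. A minimal sketch of that lookup against a trimmed sample of the log's output (`jq` assumed available; no SPDK target needed):

```shell
# Pick one raid bdev by name, as bdev_raid.sh@113 does above, then read a field.
# The array is a trimmed sample of bdev_raid_get_bdevs output from the log.
all='[{"name": "raid_bdev1", "state": "online", "num_base_bdevs_discovered": 2}]'
tmp=$(echo "$all" | jq -r '.[] | select(.name == "raid_bdev1")')
echo "$tmp" | jq -r '.state'   # online
```

The test then compares `state`, `raid_level`, `strip_size_kb`, and the discovered/operational counts in `$tmp` against the expected values passed in.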
20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.539 "name": "raid_bdev1", 00:21:54.539 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:54.539 "strip_size_kb": 0, 00:21:54.539 "state": "online", 00:21:54.539 "raid_level": "raid1", 00:21:54.539 "superblock": true, 00:21:54.539 "num_base_bdevs": 2, 00:21:54.539 "num_base_bdevs_discovered": 2, 00:21:54.539 "num_base_bdevs_operational": 2, 00:21:54.539 "base_bdevs_list": [ 00:21:54.539 { 00:21:54.539 "name": "pt1", 00:21:54.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:54.539 "is_configured": true, 00:21:54.539 "data_offset": 256, 00:21:54.539 "data_size": 7936 00:21:54.539 }, 00:21:54.539 { 00:21:54.539 "name": "pt2", 00:21:54.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.539 "is_configured": true, 00:21:54.539 "data_offset": 256, 00:21:54.539 "data_size": 7936 00:21:54.539 } 00:21:54.539 ] 00:21:54.539 }' 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.539 20:34:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:54.799 20:34:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.799 [2024-11-26 20:34:48.328141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:54.799 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.128 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.128 "name": "raid_bdev1", 00:21:55.128 "aliases": [ 00:21:55.128 "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3" 00:21:55.128 ], 00:21:55.128 "product_name": "Raid Volume", 00:21:55.128 "block_size": 4128, 00:21:55.128 "num_blocks": 7936, 00:21:55.128 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:55.128 "md_size": 32, 00:21:55.128 "md_interleave": true, 00:21:55.128 "dif_type": 0, 00:21:55.128 "assigned_rate_limits": { 00:21:55.128 "rw_ios_per_sec": 0, 00:21:55.128 "rw_mbytes_per_sec": 0, 00:21:55.128 "r_mbytes_per_sec": 0, 00:21:55.128 "w_mbytes_per_sec": 0 00:21:55.128 }, 00:21:55.128 "claimed": false, 00:21:55.128 "zoned": false, 00:21:55.128 "supported_io_types": { 00:21:55.128 "read": true, 00:21:55.128 "write": true, 00:21:55.128 "unmap": false, 00:21:55.128 "flush": false, 00:21:55.128 "reset": true, 
00:21:55.128 "nvme_admin": false, 00:21:55.128 "nvme_io": false, 00:21:55.128 "nvme_io_md": false, 00:21:55.128 "write_zeroes": true, 00:21:55.128 "zcopy": false, 00:21:55.128 "get_zone_info": false, 00:21:55.128 "zone_management": false, 00:21:55.128 "zone_append": false, 00:21:55.128 "compare": false, 00:21:55.128 "compare_and_write": false, 00:21:55.128 "abort": false, 00:21:55.128 "seek_hole": false, 00:21:55.128 "seek_data": false, 00:21:55.128 "copy": false, 00:21:55.128 "nvme_iov_md": false 00:21:55.128 }, 00:21:55.128 "memory_domains": [ 00:21:55.128 { 00:21:55.128 "dma_device_id": "system", 00:21:55.128 "dma_device_type": 1 00:21:55.128 }, 00:21:55.128 { 00:21:55.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.128 "dma_device_type": 2 00:21:55.128 }, 00:21:55.128 { 00:21:55.128 "dma_device_id": "system", 00:21:55.128 "dma_device_type": 1 00:21:55.128 }, 00:21:55.128 { 00:21:55.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.128 "dma_device_type": 2 00:21:55.128 } 00:21:55.128 ], 00:21:55.128 "driver_specific": { 00:21:55.128 "raid": { 00:21:55.128 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:55.128 "strip_size_kb": 0, 00:21:55.128 "state": "online", 00:21:55.128 "raid_level": "raid1", 00:21:55.128 "superblock": true, 00:21:55.128 "num_base_bdevs": 2, 00:21:55.128 "num_base_bdevs_discovered": 2, 00:21:55.128 "num_base_bdevs_operational": 2, 00:21:55.128 "base_bdevs_list": [ 00:21:55.128 { 00:21:55.128 "name": "pt1", 00:21:55.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.128 "is_configured": true, 00:21:55.128 "data_offset": 256, 00:21:55.128 "data_size": 7936 00:21:55.128 }, 00:21:55.128 { 00:21:55.128 "name": "pt2", 00:21:55.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.128 "is_configured": true, 00:21:55.128 "data_offset": 256, 00:21:55.128 "data_size": 7936 00:21:55.128 } 00:21:55.128 ] 00:21:55.128 } 00:21:55.128 } 00:21:55.128 }' 00:21:55.128 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:55.129 pt2' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.129 
20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.129 [2024-11-26 20:34:48.579718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8a0bf5fc-b0a4-4c02-a528-abe8342e28d3 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 8a0bf5fc-b0a4-4c02-a528-abe8342e28d3 ']' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.129 20:34:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.129 [2024-11-26 20:34:48.627289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.129 [2024-11-26 20:34:48.627323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.129 [2024-11-26 20:34:48.627442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.129 [2024-11-26 20:34:48.627528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.129 [2024-11-26 20:34:48.627554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:55.129 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:55.389 20:34:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.389 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:55.390 
20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.390 [2024-11-26 20:34:48.759085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:55.390 [2024-11-26 20:34:48.761275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:55.390 [2024-11-26 20:34:48.761369] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:55.390 [2024-11-26 20:34:48.761433] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:55.390 [2024-11-26 20:34:48.761451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.390 [2024-11-26 20:34:48.761468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:55.390 request: 
00:21:55.390 { 00:21:55.390 "name": "raid_bdev1", 00:21:55.390 "raid_level": "raid1", 00:21:55.390 "base_bdevs": [ 00:21:55.390 "malloc1", 00:21:55.390 "malloc2" 00:21:55.390 ], 00:21:55.390 "superblock": false, 00:21:55.390 "method": "bdev_raid_create", 00:21:55.390 "req_id": 1 00:21:55.390 } 00:21:55.390 Got JSON-RPC error response 00:21:55.390 response: 00:21:55.390 { 00:21:55.390 "code": -17, 00:21:55.390 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:55.390 } 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.390 [2024-11-26 20:34:48.822939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:55.390 [2024-11-26 20:34:48.823015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.390 [2024-11-26 20:34:48.823036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:55.390 [2024-11-26 20:34:48.823048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.390 [2024-11-26 20:34:48.825259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.390 [2024-11-26 20:34:48.825302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:55.390 [2024-11-26 20:34:48.825368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:55.390 [2024-11-26 20:34:48.825445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:55.390 pt1 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.390 20:34:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.390 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.391 "name": "raid_bdev1", 00:21:55.391 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:55.391 "strip_size_kb": 0, 00:21:55.391 "state": "configuring", 00:21:55.391 "raid_level": "raid1", 00:21:55.391 "superblock": true, 00:21:55.391 "num_base_bdevs": 2, 00:21:55.391 "num_base_bdevs_discovered": 1, 00:21:55.391 "num_base_bdevs_operational": 2, 00:21:55.391 "base_bdevs_list": [ 00:21:55.391 { 00:21:55.391 "name": "pt1", 00:21:55.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.391 "is_configured": true, 00:21:55.391 
"data_offset": 256, 00:21:55.391 "data_size": 7936 00:21:55.391 }, 00:21:55.391 { 00:21:55.391 "name": null, 00:21:55.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.391 "is_configured": false, 00:21:55.391 "data_offset": 256, 00:21:55.391 "data_size": 7936 00:21:55.391 } 00:21:55.391 ] 00:21:55.391 }' 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.391 20:34:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.961 [2024-11-26 20:34:49.306350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:55.961 [2024-11-26 20:34:49.306437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.961 [2024-11-26 20:34:49.306468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:55.961 [2024-11-26 20:34:49.306482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.961 [2024-11-26 20:34:49.306681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.961 [2024-11-26 20:34:49.306708] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:21:55.961 [2024-11-26 20:34:49.306770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:55.961 [2024-11-26 20:34:49.306801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.961 [2024-11-26 20:34:49.306905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:55.961 [2024-11-26 20:34:49.306924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:55.961 [2024-11-26 20:34:49.307011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:55.961 [2024-11-26 20:34:49.307090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:55.961 [2024-11-26 20:34:49.307102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:55.961 [2024-11-26 20:34:49.307175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.961 pt2 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.961 20:34:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.961 "name": "raid_bdev1", 00:21:55.961 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:55.961 "strip_size_kb": 0, 00:21:55.961 "state": "online", 00:21:55.961 "raid_level": "raid1", 00:21:55.961 "superblock": true, 00:21:55.961 "num_base_bdevs": 2, 00:21:55.961 "num_base_bdevs_discovered": 2, 00:21:55.961 "num_base_bdevs_operational": 2, 00:21:55.961 "base_bdevs_list": [ 00:21:55.961 { 00:21:55.961 "name": "pt1", 00:21:55.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.961 "is_configured": true, 00:21:55.961 
"data_offset": 256, 00:21:55.961 "data_size": 7936 00:21:55.961 }, 00:21:55.961 { 00:21:55.961 "name": "pt2", 00:21:55.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.961 "is_configured": true, 00:21:55.961 "data_offset": 256, 00:21:55.961 "data_size": 7936 00:21:55.961 } 00:21:55.961 ] 00:21:55.961 }' 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.961 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:56.526 [2024-11-26 20:34:49.781860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:56.526 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:56.526 "name": "raid_bdev1", 00:21:56.526 "aliases": [ 00:21:56.526 "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3" 00:21:56.526 ], 00:21:56.526 "product_name": "Raid Volume", 00:21:56.526 "block_size": 4128, 00:21:56.526 "num_blocks": 7936, 00:21:56.526 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:56.526 "md_size": 32, 00:21:56.526 "md_interleave": true, 00:21:56.526 "dif_type": 0, 00:21:56.526 "assigned_rate_limits": { 00:21:56.526 "rw_ios_per_sec": 0, 00:21:56.526 "rw_mbytes_per_sec": 0, 00:21:56.526 "r_mbytes_per_sec": 0, 00:21:56.526 "w_mbytes_per_sec": 0 00:21:56.526 }, 00:21:56.526 "claimed": false, 00:21:56.526 "zoned": false, 00:21:56.526 "supported_io_types": { 00:21:56.526 "read": true, 00:21:56.526 "write": true, 00:21:56.526 "unmap": false, 00:21:56.526 "flush": false, 00:21:56.526 "reset": true, 00:21:56.526 "nvme_admin": false, 00:21:56.526 "nvme_io": false, 00:21:56.526 "nvme_io_md": false, 00:21:56.526 "write_zeroes": true, 00:21:56.526 "zcopy": false, 00:21:56.526 "get_zone_info": false, 00:21:56.526 "zone_management": false, 00:21:56.526 "zone_append": false, 00:21:56.526 "compare": false, 00:21:56.526 "compare_and_write": false, 00:21:56.526 "abort": false, 00:21:56.526 "seek_hole": false, 00:21:56.526 "seek_data": false, 00:21:56.526 "copy": false, 00:21:56.526 "nvme_iov_md": false 00:21:56.526 }, 00:21:56.526 "memory_domains": [ 00:21:56.526 { 00:21:56.526 "dma_device_id": "system", 00:21:56.526 "dma_device_type": 1 00:21:56.526 }, 00:21:56.526 { 00:21:56.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.526 "dma_device_type": 2 00:21:56.526 }, 00:21:56.526 { 00:21:56.526 "dma_device_id": "system", 00:21:56.526 "dma_device_type": 1 00:21:56.527 }, 00:21:56.527 { 00:21:56.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.527 "dma_device_type": 2 00:21:56.527 } 00:21:56.527 ], 00:21:56.527 "driver_specific": { 
00:21:56.527 "raid": { 00:21:56.527 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:56.527 "strip_size_kb": 0, 00:21:56.527 "state": "online", 00:21:56.527 "raid_level": "raid1", 00:21:56.527 "superblock": true, 00:21:56.527 "num_base_bdevs": 2, 00:21:56.527 "num_base_bdevs_discovered": 2, 00:21:56.527 "num_base_bdevs_operational": 2, 00:21:56.527 "base_bdevs_list": [ 00:21:56.527 { 00:21:56.527 "name": "pt1", 00:21:56.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.527 "is_configured": true, 00:21:56.527 "data_offset": 256, 00:21:56.527 "data_size": 7936 00:21:56.527 }, 00:21:56.527 { 00:21:56.527 "name": "pt2", 00:21:56.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.527 "is_configured": true, 00:21:56.527 "data_offset": 256, 00:21:56.527 "data_size": 7936 00:21:56.527 } 00:21:56.527 ] 00:21:56.527 } 00:21:56.527 } 00:21:56.527 }' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:56.527 pt2' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:56.527 20:34:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.527 [2024-11-26 20:34:49.997677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 8a0bf5fc-b0a4-4c02-a528-abe8342e28d3 '!=' 8a0bf5fc-b0a4-4c02-a528-abe8342e28d3 ']' 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.527 [2024-11-26 20:34:50.033352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:56.527 
20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.527 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.785 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.785 "name": "raid_bdev1", 00:21:56.785 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:56.785 "strip_size_kb": 0, 00:21:56.785 "state": "online", 00:21:56.785 "raid_level": "raid1", 00:21:56.785 "superblock": true, 00:21:56.785 "num_base_bdevs": 2, 00:21:56.785 "num_base_bdevs_discovered": 1, 00:21:56.785 "num_base_bdevs_operational": 1, 00:21:56.785 "base_bdevs_list": [ 00:21:56.785 { 00:21:56.785 "name": null, 00:21:56.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.785 "is_configured": false, 00:21:56.785 
"data_offset": 0, 00:21:56.785 "data_size": 7936 00:21:56.785 }, 00:21:56.785 { 00:21:56.785 "name": "pt2", 00:21:56.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.785 "is_configured": true, 00:21:56.785 "data_offset": 256, 00:21:56.785 "data_size": 7936 00:21:56.785 } 00:21:56.785 ] 00:21:56.785 }' 00:21:56.785 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.785 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.044 [2024-11-26 20:34:50.445102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.044 [2024-11-26 20:34:50.445138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.044 [2024-11-26 20:34:50.445231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.044 [2024-11-26 20:34:50.445303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.044 [2024-11-26 20:34:50.445323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.044 20:34:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.044 [2024-11-26 20:34:50.505108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:57.044 [2024-11-26 20:34:50.505174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.044 [2024-11-26 20:34:50.505198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:57.044 [2024-11-26 20:34:50.505213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.044 [2024-11-26 20:34:50.507596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.044 [2024-11-26 20:34:50.507641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:57.044 [2024-11-26 20:34:50.507700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:57.044 [2024-11-26 20:34:50.507772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:57.044 [2024-11-26 20:34:50.507856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:57.044 [2024-11-26 20:34:50.507876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:57.044 [2024-11-26 20:34:50.507984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:57.044 [2024-11-26 20:34:50.508069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:57.044 [2024-11-26 20:34:50.508082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:57.044 [2024-11-26 20:34:50.508158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:57.044 pt2 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.044 "name": "raid_bdev1", 00:21:57.044 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:57.044 "strip_size_kb": 0, 00:21:57.044 "state": "online", 00:21:57.044 "raid_level": "raid1", 00:21:57.044 "superblock": true, 00:21:57.044 "num_base_bdevs": 2, 00:21:57.044 "num_base_bdevs_discovered": 1, 00:21:57.044 "num_base_bdevs_operational": 1, 00:21:57.044 "base_bdevs_list": [ 00:21:57.044 { 00:21:57.044 "name": null, 00:21:57.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.044 "is_configured": false, 00:21:57.044 "data_offset": 256, 00:21:57.044 "data_size": 7936 00:21:57.044 }, 00:21:57.044 { 00:21:57.044 "name": "pt2", 00:21:57.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.044 "is_configured": true, 00:21:57.044 "data_offset": 256, 00:21:57.044 "data_size": 7936 00:21:57.044 } 00:21:57.044 ] 00:21:57.044 }' 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.044 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.612 [2024-11-26 20:34:50.988313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.612 [2024-11-26 20:34:50.988348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.612 [2024-11-26 20:34:50.988431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.612 
[2024-11-26 20:34:50.988499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.612 [2024-11-26 20:34:50.988516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.612 20:34:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.612 [2024-11-26 20:34:51.048236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:57.612 [2024-11-26 20:34:51.048320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:21:57.612 [2024-11-26 20:34:51.048346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:57.612 [2024-11-26 20:34:51.048359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.612 [2024-11-26 20:34:51.050595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.612 [2024-11-26 20:34:51.050636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:57.612 [2024-11-26 20:34:51.050703] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:57.612 [2024-11-26 20:34:51.050763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:57.612 [2024-11-26 20:34:51.050884] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:57.612 [2024-11-26 20:34:51.050904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.612 [2024-11-26 20:34:51.050928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:57.612 [2024-11-26 20:34:51.051003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:57.612 [2024-11-26 20:34:51.051100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:57.612 [2024-11-26 20:34:51.051114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:57.612 [2024-11-26 20:34:51.051196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:57.612 [2024-11-26 20:34:51.051280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:57.612 [2024-11-26 20:34:51.051296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:57.612 [2024-11-26 
20:34:51.051377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.612 pt1 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.612 
20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.612 "name": "raid_bdev1", 00:21:57.612 "uuid": "8a0bf5fc-b0a4-4c02-a528-abe8342e28d3", 00:21:57.612 "strip_size_kb": 0, 00:21:57.612 "state": "online", 00:21:57.612 "raid_level": "raid1", 00:21:57.612 "superblock": true, 00:21:57.612 "num_base_bdevs": 2, 00:21:57.612 "num_base_bdevs_discovered": 1, 00:21:57.612 "num_base_bdevs_operational": 1, 00:21:57.612 "base_bdevs_list": [ 00:21:57.612 { 00:21:57.612 "name": null, 00:21:57.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.612 "is_configured": false, 00:21:57.612 "data_offset": 256, 00:21:57.612 "data_size": 7936 00:21:57.612 }, 00:21:57.612 { 00:21:57.612 "name": "pt2", 00:21:57.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.612 "is_configured": true, 00:21:57.612 "data_offset": 256, 00:21:57.612 "data_size": 7936 00:21:57.612 } 00:21:57.612 ] 00:21:57.612 }' 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.612 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:58.180 [2024-11-26 20:34:51.567647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 8a0bf5fc-b0a4-4c02-a528-abe8342e28d3 '!=' 8a0bf5fc-b0a4-4c02-a528-abe8342e28d3 ']' 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89200 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89200 ']' 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89200 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89200 00:21:58.180 killing process with pid 89200 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89200' 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89200 00:21:58.180 20:34:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89200 00:21:58.180 [2024-11-26 20:34:51.635219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:58.180 [2024-11-26 20:34:51.635332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:58.180 [2024-11-26 20:34:51.635395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:58.180 [2024-11-26 20:34:51.635417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:58.438 [2024-11-26 20:34:51.889678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.814 20:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:59.814 00:21:59.814 real 0m6.419s 00:21:59.814 user 0m9.645s 00:21:59.814 sys 0m0.997s 00:21:59.814 20:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.814 ************************************ 00:21:59.814 END TEST raid_superblock_test_md_interleaved 00:21:59.814 ************************************ 00:21:59.814 20:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.814 20:34:53 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:59.814 20:34:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:59.814 20:34:53 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.814 20:34:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.814 ************************************ 00:21:59.814 START TEST raid_rebuild_test_sb_md_interleaved 00:21:59.814 ************************************ 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:59.814 20:34:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89529 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89529 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89529 ']' 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.814 20:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.073 [2024-11-26 20:34:53.373224] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:22:00.073 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:00.073 Zero copy mechanism will not be used. 00:22:00.073 [2024-11-26 20:34:53.373873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89529 ] 00:22:00.073 [2024-11-26 20:34:53.552081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.332 [2024-11-26 20:34:53.692268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.592 [2024-11-26 20:34:53.934065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.592 [2024-11-26 20:34:53.934141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.851 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.852 BaseBdev1_malloc 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.852 [2024-11-26 20:34:54.350315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:00.852 [2024-11-26 20:34:54.350382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.852 [2024-11-26 20:34:54.350407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:00.852 [2024-11-26 20:34:54.350420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.852 [2024-11-26 20:34:54.352572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.852 [2024-11-26 20:34:54.352616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:00.852 BaseBdev1 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.852 BaseBdev2_malloc 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.852 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 [2024-11-26 20:34:54.408977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:01.110 [2024-11-26 20:34:54.409049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.110 [2024-11-26 20:34:54.409073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:01.110 [2024-11-26 20:34:54.409088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.110 [2024-11-26 20:34:54.411197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.110 [2024-11-26 20:34:54.411254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:01.110 BaseBdev2 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 
00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 spare_malloc 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 spare_delay 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 [2024-11-26 20:34:54.493169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.110 [2024-11-26 20:34:54.493253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.110 [2024-11-26 20:34:54.493278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:01.110 [2024-11-26 20:34:54.493292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.110 [2024-11-26 20:34:54.495455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.110 [2024-11-26 20:34:54.495501] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.110 spare 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 [2024-11-26 20:34:54.501207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.110 [2024-11-26 20:34:54.503282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.110 [2024-11-26 20:34:54.503509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:01.110 [2024-11-26 20:34:54.503536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:01.110 [2024-11-26 20:34:54.503620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:01.110 [2024-11-26 20:34:54.503709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:01.110 [2024-11-26 20:34:54.503722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:01.110 [2024-11-26 20:34:54.503803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.110 "name": "raid_bdev1", 00:22:01.110 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:01.110 "strip_size_kb": 0, 00:22:01.110 "state": "online", 00:22:01.110 "raid_level": "raid1", 00:22:01.110 "superblock": 
true, 00:22:01.110 "num_base_bdevs": 2, 00:22:01.110 "num_base_bdevs_discovered": 2, 00:22:01.110 "num_base_bdevs_operational": 2, 00:22:01.110 "base_bdevs_list": [ 00:22:01.110 { 00:22:01.110 "name": "BaseBdev1", 00:22:01.110 "uuid": "f42f121e-e9e1-5bf6-b98f-da3acddb44dc", 00:22:01.110 "is_configured": true, 00:22:01.110 "data_offset": 256, 00:22:01.110 "data_size": 7936 00:22:01.110 }, 00:22:01.110 { 00:22:01.110 "name": "BaseBdev2", 00:22:01.110 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:01.110 "is_configured": true, 00:22:01.110 "data_offset": 256, 00:22:01.110 "data_size": 7936 00:22:01.110 } 00:22:01.110 ] 00:22:01.110 }' 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.110 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.679 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:01.679 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:01.679 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.679 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.679 [2024-11-26 20:34:54.964818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.679 20:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.679 [2024-11-26 20:34:55.064377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.679 20:34:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.679 "name": "raid_bdev1", 00:22:01.679 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:01.679 "strip_size_kb": 0, 00:22:01.679 "state": "online", 00:22:01.679 "raid_level": "raid1", 00:22:01.679 "superblock": true, 00:22:01.679 "num_base_bdevs": 2, 00:22:01.679 "num_base_bdevs_discovered": 1, 00:22:01.679 "num_base_bdevs_operational": 1, 00:22:01.679 "base_bdevs_list": [ 00:22:01.679 { 00:22:01.679 "name": null, 00:22:01.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.679 "is_configured": false, 00:22:01.679 "data_offset": 0, 00:22:01.679 "data_size": 7936 00:22:01.679 }, 00:22:01.679 { 00:22:01.679 "name": "BaseBdev2", 00:22:01.679 
"uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:01.679 "is_configured": true, 00:22:01.679 "data_offset": 256, 00:22:01.679 "data_size": 7936 00:22:01.679 } 00:22:01.679 ] 00:22:01.679 }' 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.679 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.248 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:02.248 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.248 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.248 [2024-11-26 20:34:55.523648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:02.248 [2024-11-26 20:34:55.544274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:02.248 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.248 20:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:02.248 [2024-11-26 20:34:55.546611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.194 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.194 "name": "raid_bdev1", 00:22:03.194 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:03.194 "strip_size_kb": 0, 00:22:03.194 "state": "online", 00:22:03.194 "raid_level": "raid1", 00:22:03.194 "superblock": true, 00:22:03.194 "num_base_bdevs": 2, 00:22:03.194 "num_base_bdevs_discovered": 2, 00:22:03.194 "num_base_bdevs_operational": 2, 00:22:03.194 "process": { 00:22:03.194 "type": "rebuild", 00:22:03.194 "target": "spare", 00:22:03.194 "progress": { 00:22:03.194 "blocks": 2560, 00:22:03.194 "percent": 32 00:22:03.194 } 00:22:03.194 }, 00:22:03.194 "base_bdevs_list": [ 00:22:03.194 { 00:22:03.195 "name": "spare", 00:22:03.195 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:03.195 "is_configured": true, 00:22:03.195 "data_offset": 256, 00:22:03.195 "data_size": 7936 00:22:03.195 }, 00:22:03.195 { 00:22:03.195 "name": "BaseBdev2", 00:22:03.195 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:03.195 "is_configured": true, 00:22:03.195 "data_offset": 256, 00:22:03.195 "data_size": 7936 00:22:03.195 } 00:22:03.195 ] 00:22:03.195 }' 00:22:03.195 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:22:03.195 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.195 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.195 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.195 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:03.195 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.195 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.195 [2024-11-26 20:34:56.677938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.453 [2024-11-26 20:34:56.753118] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:03.453 [2024-11-26 20:34:56.753228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.453 [2024-11-26 20:34:56.753258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.453 [2024-11-26 20:34:56.753274] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.453 "name": "raid_bdev1", 00:22:03.453 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:03.453 "strip_size_kb": 0, 00:22:03.453 "state": "online", 00:22:03.453 "raid_level": "raid1", 00:22:03.453 "superblock": true, 00:22:03.453 "num_base_bdevs": 2, 00:22:03.453 "num_base_bdevs_discovered": 1, 00:22:03.453 "num_base_bdevs_operational": 1, 00:22:03.453 "base_bdevs_list": [ 00:22:03.453 { 00:22:03.453 "name": null, 00:22:03.453 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:03.453 "is_configured": false, 00:22:03.453 "data_offset": 0, 00:22:03.453 "data_size": 7936 00:22:03.453 }, 00:22:03.453 { 00:22:03.453 "name": "BaseBdev2", 00:22:03.453 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:03.453 "is_configured": true, 00:22:03.453 "data_offset": 256, 00:22:03.453 "data_size": 7936 00:22:03.453 } 00:22:03.453 ] 00:22:03.453 }' 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.453 20:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:03.711 "name": "raid_bdev1", 00:22:03.711 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:03.711 "strip_size_kb": 0, 00:22:03.711 "state": "online", 00:22:03.711 "raid_level": "raid1", 00:22:03.711 "superblock": true, 00:22:03.711 "num_base_bdevs": 2, 00:22:03.711 "num_base_bdevs_discovered": 1, 00:22:03.711 "num_base_bdevs_operational": 1, 00:22:03.711 "base_bdevs_list": [ 00:22:03.711 { 00:22:03.711 "name": null, 00:22:03.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.711 "is_configured": false, 00:22:03.711 "data_offset": 0, 00:22:03.711 "data_size": 7936 00:22:03.711 }, 00:22:03.711 { 00:22:03.711 "name": "BaseBdev2", 00:22:03.711 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:03.711 "is_configured": true, 00:22:03.711 "data_offset": 256, 00:22:03.711 "data_size": 7936 00:22:03.711 } 00:22:03.711 ] 00:22:03.711 }' 00:22:03.711 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.971 [2024-11-26 20:34:57.357126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:03.971 [2024-11-26 20:34:57.376860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005fb0 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.971 20:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:03.971 [2024-11-26 20:34:57.379010] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.909 "name": "raid_bdev1", 00:22:04.909 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:04.909 "strip_size_kb": 0, 00:22:04.909 "state": "online", 00:22:04.909 "raid_level": "raid1", 00:22:04.909 "superblock": true, 
00:22:04.909 "num_base_bdevs": 2, 00:22:04.909 "num_base_bdevs_discovered": 2, 00:22:04.909 "num_base_bdevs_operational": 2, 00:22:04.909 "process": { 00:22:04.909 "type": "rebuild", 00:22:04.909 "target": "spare", 00:22:04.909 "progress": { 00:22:04.909 "blocks": 2560, 00:22:04.909 "percent": 32 00:22:04.909 } 00:22:04.909 }, 00:22:04.909 "base_bdevs_list": [ 00:22:04.909 { 00:22:04.909 "name": "spare", 00:22:04.909 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:04.909 "is_configured": true, 00:22:04.909 "data_offset": 256, 00:22:04.909 "data_size": 7936 00:22:04.909 }, 00:22:04.909 { 00:22:04.909 "name": "BaseBdev2", 00:22:04.909 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:04.909 "is_configured": true, 00:22:04.909 "data_offset": 256, 00:22:04.909 "data_size": 7936 00:22:04.909 } 00:22:04.909 ] 00:22:04.909 }' 00:22:04.909 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:05.167 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:05.167 20:34:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=771 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.167 "name": "raid_bdev1", 00:22:05.167 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:05.167 "strip_size_kb": 0, 00:22:05.167 "state": "online", 00:22:05.167 "raid_level": "raid1", 00:22:05.167 "superblock": true, 00:22:05.167 "num_base_bdevs": 2, 00:22:05.167 
"num_base_bdevs_discovered": 2, 00:22:05.167 "num_base_bdevs_operational": 2, 00:22:05.167 "process": { 00:22:05.167 "type": "rebuild", 00:22:05.167 "target": "spare", 00:22:05.167 "progress": { 00:22:05.167 "blocks": 2816, 00:22:05.167 "percent": 35 00:22:05.167 } 00:22:05.167 }, 00:22:05.167 "base_bdevs_list": [ 00:22:05.167 { 00:22:05.167 "name": "spare", 00:22:05.167 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:05.167 "is_configured": true, 00:22:05.167 "data_offset": 256, 00:22:05.167 "data_size": 7936 00:22:05.167 }, 00:22:05.167 { 00:22:05.167 "name": "BaseBdev2", 00:22:05.167 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:05.167 "is_configured": true, 00:22:05.167 "data_offset": 256, 00:22:05.167 "data_size": 7936 00:22:05.167 } 00:22:05.167 ] 00:22:05.167 }' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.167 20:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:06.099 20:34:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.099 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.357 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.357 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.357 "name": "raid_bdev1", 00:22:06.357 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:06.357 "strip_size_kb": 0, 00:22:06.357 "state": "online", 00:22:06.357 "raid_level": "raid1", 00:22:06.357 "superblock": true, 00:22:06.357 "num_base_bdevs": 2, 00:22:06.357 "num_base_bdevs_discovered": 2, 00:22:06.357 "num_base_bdevs_operational": 2, 00:22:06.357 "process": { 00:22:06.357 "type": "rebuild", 00:22:06.357 "target": "spare", 00:22:06.357 "progress": { 00:22:06.357 "blocks": 5632, 00:22:06.357 "percent": 70 00:22:06.357 } 00:22:06.357 }, 00:22:06.357 "base_bdevs_list": [ 00:22:06.357 { 00:22:06.357 "name": "spare", 00:22:06.357 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:06.357 "is_configured": true, 00:22:06.357 "data_offset": 256, 00:22:06.357 "data_size": 7936 00:22:06.357 }, 00:22:06.357 { 00:22:06.357 "name": "BaseBdev2", 00:22:06.357 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:06.357 "is_configured": true, 00:22:06.357 "data_offset": 256, 00:22:06.358 "data_size": 7936 00:22:06.358 } 
00:22:06.358 ] 00:22:06.358 }' 00:22:06.358 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.358 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.358 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.358 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.358 20:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:07.290 [2024-11-26 20:35:00.495520] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:07.290 [2024-11-26 20:35:00.495618] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:07.290 [2024-11-26 20:35:00.495767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.290 "name": "raid_bdev1", 00:22:07.290 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:07.290 "strip_size_kb": 0, 00:22:07.290 "state": "online", 00:22:07.290 "raid_level": "raid1", 00:22:07.290 "superblock": true, 00:22:07.290 "num_base_bdevs": 2, 00:22:07.290 "num_base_bdevs_discovered": 2, 00:22:07.290 "num_base_bdevs_operational": 2, 00:22:07.290 "base_bdevs_list": [ 00:22:07.290 { 00:22:07.290 "name": "spare", 00:22:07.290 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:07.290 "is_configured": true, 00:22:07.290 "data_offset": 256, 00:22:07.290 "data_size": 7936 00:22:07.290 }, 00:22:07.290 { 00:22:07.290 "name": "BaseBdev2", 00:22:07.290 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:07.290 "is_configured": true, 00:22:07.290 "data_offset": 256, 00:22:07.290 "data_size": 7936 00:22:07.290 } 00:22:07.290 ] 00:22:07.290 }' 00:22:07.290 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@709 -- # break 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.548 "name": "raid_bdev1", 00:22:07.548 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:07.548 "strip_size_kb": 0, 00:22:07.548 "state": "online", 00:22:07.548 "raid_level": "raid1", 00:22:07.548 "superblock": true, 00:22:07.548 "num_base_bdevs": 2, 00:22:07.548 "num_base_bdevs_discovered": 2, 00:22:07.548 "num_base_bdevs_operational": 2, 00:22:07.548 "base_bdevs_list": [ 00:22:07.548 { 00:22:07.548 "name": "spare", 00:22:07.548 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:07.548 "is_configured": true, 00:22:07.548 "data_offset": 256, 00:22:07.548 "data_size": 7936 
00:22:07.548 }, 00:22:07.548 { 00:22:07.548 "name": "BaseBdev2", 00:22:07.548 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:07.548 "is_configured": true, 00:22:07.548 "data_offset": 256, 00:22:07.548 "data_size": 7936 00:22:07.548 } 00:22:07.548 ] 00:22:07.548 }' 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:07.548 20:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.548 "name": "raid_bdev1", 00:22:07.548 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:07.548 "strip_size_kb": 0, 00:22:07.548 "state": "online", 00:22:07.548 "raid_level": "raid1", 00:22:07.548 "superblock": true, 00:22:07.548 "num_base_bdevs": 2, 00:22:07.548 "num_base_bdevs_discovered": 2, 00:22:07.548 "num_base_bdevs_operational": 2, 00:22:07.548 "base_bdevs_list": [ 00:22:07.548 { 00:22:07.548 "name": "spare", 00:22:07.548 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:07.548 "is_configured": true, 00:22:07.548 "data_offset": 256, 00:22:07.548 "data_size": 7936 00:22:07.548 }, 00:22:07.548 { 00:22:07.548 "name": "BaseBdev2", 00:22:07.548 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:07.548 "is_configured": true, 00:22:07.548 "data_offset": 256, 00:22:07.548 "data_size": 7936 00:22:07.548 } 00:22:07.548 ] 00:22:07.548 }' 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.548 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.113 [2024-11-26 20:35:01.413618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:08.113 [2024-11-26 20:35:01.413656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.113 [2024-11-26 20:35:01.413766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.113 [2024-11-26 20:35:01.413853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:08.113 [2024-11-26 20:35:01.413874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:08.113 20:35:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.113 [2024-11-26 20:35:01.477484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:08.113 [2024-11-26 20:35:01.477558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.113 [2024-11-26 20:35:01.477586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:08.113 [2024-11-26 20:35:01.477602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.113 [2024-11-26 20:35:01.479876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.113 [2024-11-26 20:35:01.479918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:08.113 [2024-11-26 20:35:01.479990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:08.113 [2024-11-26 20:35:01.480053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:08.113 [2024-11-26 20:35:01.480199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.113 spare 00:22:08.113 20:35:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.113 [2024-11-26 20:35:01.580144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:08.113 [2024-11-26 20:35:01.580213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:08.113 [2024-11-26 20:35:01.580380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:08.113 [2024-11-26 20:35:01.580520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:08.113 [2024-11-26 20:35:01.580540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:08.113 [2024-11-26 20:35:01.580668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.113 "name": "raid_bdev1", 00:22:08.113 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:08.113 "strip_size_kb": 0, 00:22:08.113 "state": "online", 00:22:08.113 "raid_level": "raid1", 00:22:08.113 "superblock": true, 00:22:08.113 "num_base_bdevs": 2, 00:22:08.113 "num_base_bdevs_discovered": 2, 00:22:08.113 "num_base_bdevs_operational": 2, 00:22:08.113 "base_bdevs_list": [ 00:22:08.113 { 00:22:08.113 "name": "spare", 00:22:08.113 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:08.113 "is_configured": true, 00:22:08.113 "data_offset": 256, 00:22:08.113 "data_size": 7936 00:22:08.113 }, 00:22:08.113 { 00:22:08.113 "name": 
"BaseBdev2", 00:22:08.113 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:08.113 "is_configured": true, 00:22:08.113 "data_offset": 256, 00:22:08.113 "data_size": 7936 00:22:08.113 } 00:22:08.113 ] 00:22:08.113 }' 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.113 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.682 20:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:08.682 "name": "raid_bdev1", 00:22:08.682 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:08.682 "strip_size_kb": 0, 00:22:08.682 "state": "online", 00:22:08.682 
"raid_level": "raid1", 00:22:08.682 "superblock": true, 00:22:08.682 "num_base_bdevs": 2, 00:22:08.682 "num_base_bdevs_discovered": 2, 00:22:08.682 "num_base_bdevs_operational": 2, 00:22:08.682 "base_bdevs_list": [ 00:22:08.682 { 00:22:08.682 "name": "spare", 00:22:08.682 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:08.682 "is_configured": true, 00:22:08.682 "data_offset": 256, 00:22:08.682 "data_size": 7936 00:22:08.682 }, 00:22:08.682 { 00:22:08.682 "name": "BaseBdev2", 00:22:08.682 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:08.682 "is_configured": true, 00:22:08.682 "data_offset": 256, 00:22:08.682 "data_size": 7936 00:22:08.682 } 00:22:08.682 ] 00:22:08.682 }' 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.682 20:35:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.682 [2024-11-26 20:35:02.148831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.682 20:35:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.682 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.682 "name": "raid_bdev1", 00:22:08.682 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:08.682 "strip_size_kb": 0, 00:22:08.682 "state": "online", 00:22:08.682 "raid_level": "raid1", 00:22:08.682 "superblock": true, 00:22:08.682 "num_base_bdevs": 2, 00:22:08.682 "num_base_bdevs_discovered": 1, 00:22:08.682 "num_base_bdevs_operational": 1, 00:22:08.682 "base_bdevs_list": [ 00:22:08.682 { 00:22:08.682 "name": null, 00:22:08.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.683 "is_configured": false, 00:22:08.683 "data_offset": 0, 00:22:08.683 "data_size": 7936 00:22:08.683 }, 00:22:08.683 { 00:22:08.683 "name": "BaseBdev2", 00:22:08.683 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:08.683 "is_configured": true, 00:22:08.683 "data_offset": 256, 00:22:08.683 "data_size": 7936 00:22:08.683 } 00:22:08.683 ] 00:22:08.683 }' 00:22:08.683 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.683 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.248 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.248 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.248 20:35:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.248 [2024-11-26 20:35:02.580122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.248 [2024-11-26 20:35:02.580372] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:09.248 [2024-11-26 20:35:02.580403] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:09.248 [2024-11-26 20:35:02.580450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.248 [2024-11-26 20:35:02.599918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:09.248 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.248 20:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:09.248 [2024-11-26 20:35:02.602133] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:10.181 "name": "raid_bdev1", 00:22:10.181 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:10.181 "strip_size_kb": 0, 00:22:10.181 "state": "online", 00:22:10.181 "raid_level": "raid1", 00:22:10.181 "superblock": true, 00:22:10.181 "num_base_bdevs": 2, 00:22:10.181 "num_base_bdevs_discovered": 2, 00:22:10.181 "num_base_bdevs_operational": 2, 00:22:10.181 "process": { 00:22:10.181 "type": "rebuild", 00:22:10.181 "target": "spare", 00:22:10.181 "progress": { 00:22:10.181 "blocks": 2560, 00:22:10.181 "percent": 32 00:22:10.181 } 00:22:10.181 }, 00:22:10.181 "base_bdevs_list": [ 00:22:10.181 { 00:22:10.181 "name": "spare", 00:22:10.181 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:10.181 "is_configured": true, 00:22:10.181 "data_offset": 256, 00:22:10.181 "data_size": 7936 00:22:10.181 }, 00:22:10.181 { 00:22:10.181 "name": "BaseBdev2", 00:22:10.181 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:10.181 "is_configured": true, 00:22:10.181 "data_offset": 256, 00:22:10.181 "data_size": 7936 00:22:10.181 } 00:22:10.181 ] 00:22:10.181 }' 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.181 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.439 [2024-11-26 20:35:03.745468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.439 [2024-11-26 20:35:03.808474] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:10.439 [2024-11-26 20:35:03.808674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.439 [2024-11-26 20:35:03.808697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.439 [2024-11-26 20:35:03.808708] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.439 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.439 "name": "raid_bdev1", 00:22:10.440 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:10.440 "strip_size_kb": 0, 00:22:10.440 "state": "online", 00:22:10.440 "raid_level": "raid1", 00:22:10.440 "superblock": true, 00:22:10.440 "num_base_bdevs": 2, 00:22:10.440 "num_base_bdevs_discovered": 1, 00:22:10.440 "num_base_bdevs_operational": 1, 00:22:10.440 "base_bdevs_list": [ 00:22:10.440 { 00:22:10.440 "name": null, 00:22:10.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.440 "is_configured": false, 00:22:10.440 "data_offset": 0, 00:22:10.440 "data_size": 7936 00:22:10.440 }, 00:22:10.440 { 00:22:10.440 "name": "BaseBdev2", 00:22:10.440 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:10.440 "is_configured": true, 
00:22:10.440 "data_offset": 256, 00:22:10.440 "data_size": 7936 00:22:10.440 } 00:22:10.440 ] 00:22:10.440 }' 00:22:10.440 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.440 20:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.007 20:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:11.007 20:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.007 20:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.007 [2024-11-26 20:35:04.307866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:11.007 [2024-11-26 20:35:04.308004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.007 [2024-11-26 20:35:04.308068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:11.007 [2024-11-26 20:35:04.308107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.007 [2024-11-26 20:35:04.308370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.007 [2024-11-26 20:35:04.308437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:11.007 [2024-11-26 20:35:04.308533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:11.007 [2024-11-26 20:35:04.308578] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:11.007 [2024-11-26 20:35:04.308626] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:11.007 [2024-11-26 20:35:04.308687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:11.007 [2024-11-26 20:35:04.328616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:11.007 spare 00:22:11.007 20:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.007 20:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:11.007 [2024-11-26 20:35:04.330828] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:12.013 "name": "raid_bdev1", 00:22:12.013 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:12.013 "strip_size_kb": 0, 00:22:12.013 "state": "online", 00:22:12.013 "raid_level": "raid1", 00:22:12.013 "superblock": true, 00:22:12.013 "num_base_bdevs": 2, 00:22:12.013 "num_base_bdevs_discovered": 2, 00:22:12.013 "num_base_bdevs_operational": 2, 00:22:12.013 "process": { 00:22:12.013 "type": "rebuild", 00:22:12.013 "target": "spare", 00:22:12.013 "progress": { 00:22:12.013 "blocks": 2560, 00:22:12.013 "percent": 32 00:22:12.013 } 00:22:12.013 }, 00:22:12.013 "base_bdevs_list": [ 00:22:12.013 { 00:22:12.013 "name": "spare", 00:22:12.013 "uuid": "f4e06f3c-d8b1-5175-bf66-75b8d1337acb", 00:22:12.013 "is_configured": true, 00:22:12.013 "data_offset": 256, 00:22:12.013 "data_size": 7936 00:22:12.013 }, 00:22:12.013 { 00:22:12.013 "name": "BaseBdev2", 00:22:12.013 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:12.013 "is_configured": true, 00:22:12.013 "data_offset": 256, 00:22:12.013 "data_size": 7936 00:22:12.013 } 00:22:12.013 ] 00:22:12.013 }' 00:22:12.013 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.014 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.014 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.014 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.014 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:12.014 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.014 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.014 [2024-11-26 
20:35:05.474460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.014 [2024-11-26 20:35:05.537208] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:12.014 [2024-11-26 20:35:05.537305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.014 [2024-11-26 20:35:05.537339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.014 [2024-11-26 20:35:05.537348] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:12.273 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.274 20:35:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.274 "name": "raid_bdev1", 00:22:12.274 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:12.274 "strip_size_kb": 0, 00:22:12.274 "state": "online", 00:22:12.274 "raid_level": "raid1", 00:22:12.274 "superblock": true, 00:22:12.274 "num_base_bdevs": 2, 00:22:12.274 "num_base_bdevs_discovered": 1, 00:22:12.274 "num_base_bdevs_operational": 1, 00:22:12.274 "base_bdevs_list": [ 00:22:12.274 { 00:22:12.274 "name": null, 00:22:12.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.274 "is_configured": false, 00:22:12.274 "data_offset": 0, 00:22:12.274 "data_size": 7936 00:22:12.274 }, 00:22:12.274 { 00:22:12.274 "name": "BaseBdev2", 00:22:12.274 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:12.274 "is_configured": true, 00:22:12.274 "data_offset": 256, 00:22:12.274 "data_size": 7936 00:22:12.274 } 00:22:12.274 ] 00:22:12.274 }' 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.274 20:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.534 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:12.534 20:35:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.534 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:12.534 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:12.534 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.534 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.534 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.535 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.535 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.535 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.794 "name": "raid_bdev1", 00:22:12.794 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:12.794 "strip_size_kb": 0, 00:22:12.794 "state": "online", 00:22:12.794 "raid_level": "raid1", 00:22:12.794 "superblock": true, 00:22:12.794 "num_base_bdevs": 2, 00:22:12.794 "num_base_bdevs_discovered": 1, 00:22:12.794 "num_base_bdevs_operational": 1, 00:22:12.794 "base_bdevs_list": [ 00:22:12.794 { 00:22:12.794 "name": null, 00:22:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.794 "is_configured": false, 00:22:12.794 "data_offset": 0, 00:22:12.794 "data_size": 7936 00:22:12.794 }, 00:22:12.794 { 00:22:12.794 "name": "BaseBdev2", 00:22:12.794 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:12.794 "is_configured": true, 00:22:12.794 "data_offset": 256, 
00:22:12.794 "data_size": 7936 00:22:12.794 } 00:22:12.794 ] 00:22:12.794 }' 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.794 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:12.795 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.795 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.795 [2024-11-26 20:35:06.235085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:12.795 [2024-11-26 20:35:06.235209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.795 [2024-11-26 20:35:06.235254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:12.795 [2024-11-26 20:35:06.235266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.795 [2024-11-26 20:35:06.235471] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.795 [2024-11-26 20:35:06.235488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:12.795 [2024-11-26 20:35:06.235547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:12.795 [2024-11-26 20:35:06.235561] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:12.795 [2024-11-26 20:35:06.235572] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:12.795 [2024-11-26 20:35:06.235583] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:12.795 BaseBdev1 00:22:12.795 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.795 20:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.732 20:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.732 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.992 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.992 "name": "raid_bdev1", 00:22:13.992 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:13.992 "strip_size_kb": 0, 00:22:13.992 "state": "online", 00:22:13.992 "raid_level": "raid1", 00:22:13.992 "superblock": true, 00:22:13.992 "num_base_bdevs": 2, 00:22:13.992 "num_base_bdevs_discovered": 1, 00:22:13.992 "num_base_bdevs_operational": 1, 00:22:13.992 "base_bdevs_list": [ 00:22:13.992 { 00:22:13.992 "name": null, 00:22:13.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.992 "is_configured": false, 00:22:13.992 "data_offset": 0, 00:22:13.992 "data_size": 7936 00:22:13.992 }, 00:22:13.992 { 00:22:13.992 "name": "BaseBdev2", 00:22:13.992 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:13.992 "is_configured": true, 00:22:13.992 "data_offset": 256, 00:22:13.992 "data_size": 7936 00:22:13.992 } 00:22:13.992 ] 00:22:13.992 }' 00:22:13.992 20:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.992 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.252 "name": "raid_bdev1", 00:22:14.252 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:14.252 "strip_size_kb": 0, 00:22:14.252 "state": "online", 00:22:14.252 "raid_level": "raid1", 00:22:14.252 "superblock": true, 00:22:14.252 "num_base_bdevs": 2, 00:22:14.252 "num_base_bdevs_discovered": 1, 00:22:14.252 "num_base_bdevs_operational": 1, 00:22:14.252 "base_bdevs_list": [ 00:22:14.252 { 00:22:14.252 "name": 
null, 00:22:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.252 "is_configured": false, 00:22:14.252 "data_offset": 0, 00:22:14.252 "data_size": 7936 00:22:14.252 }, 00:22:14.252 { 00:22:14.252 "name": "BaseBdev2", 00:22:14.252 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:14.252 "is_configured": true, 00:22:14.252 "data_offset": 256, 00:22:14.252 "data_size": 7936 00:22:14.252 } 00:22:14.252 ] 00:22:14.252 }' 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:14.252 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.510 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.511 [2024-11-26 20:35:07.828506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:14.511 [2024-11-26 20:35:07.828683] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:14.511 [2024-11-26 20:35:07.828703] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:14.511 request: 00:22:14.511 { 00:22:14.511 "base_bdev": "BaseBdev1", 00:22:14.511 "raid_bdev": "raid_bdev1", 00:22:14.511 "method": "bdev_raid_add_base_bdev", 00:22:14.511 "req_id": 1 00:22:14.511 } 00:22:14.511 Got JSON-RPC error response 00:22:14.511 response: 00:22:14.511 { 00:22:14.511 "code": -22, 00:22:14.511 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:14.511 } 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.511 20:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.447 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.448 "name": "raid_bdev1", 00:22:15.448 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:15.448 "strip_size_kb": 0, 
00:22:15.448 "state": "online", 00:22:15.448 "raid_level": "raid1", 00:22:15.448 "superblock": true, 00:22:15.448 "num_base_bdevs": 2, 00:22:15.448 "num_base_bdevs_discovered": 1, 00:22:15.448 "num_base_bdevs_operational": 1, 00:22:15.448 "base_bdevs_list": [ 00:22:15.448 { 00:22:15.448 "name": null, 00:22:15.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.448 "is_configured": false, 00:22:15.448 "data_offset": 0, 00:22:15.448 "data_size": 7936 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "name": "BaseBdev2", 00:22:15.448 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:15.448 "is_configured": true, 00:22:15.448 "data_offset": 256, 00:22:15.448 "data_size": 7936 00:22:15.448 } 00:22:15.448 ] 00:22:15.448 }' 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.448 20:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.708 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:15.708 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:15.708 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:15.708 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:15.708 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:15.967 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.967 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.967 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.967 
20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.967 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.967 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:15.967 "name": "raid_bdev1", 00:22:15.967 "uuid": "2d562af0-c90c-4043-aad5-3b7dbc82a680", 00:22:15.967 "strip_size_kb": 0, 00:22:15.967 "state": "online", 00:22:15.967 "raid_level": "raid1", 00:22:15.967 "superblock": true, 00:22:15.967 "num_base_bdevs": 2, 00:22:15.967 "num_base_bdevs_discovered": 1, 00:22:15.967 "num_base_bdevs_operational": 1, 00:22:15.967 "base_bdevs_list": [ 00:22:15.967 { 00:22:15.967 "name": null, 00:22:15.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.967 "is_configured": false, 00:22:15.967 "data_offset": 0, 00:22:15.967 "data_size": 7936 00:22:15.967 }, 00:22:15.967 { 00:22:15.967 "name": "BaseBdev2", 00:22:15.967 "uuid": "032fafc3-657b-5b2e-8c72-a64905080e6c", 00:22:15.967 "is_configured": true, 00:22:15.967 "data_offset": 256, 00:22:15.967 "data_size": 7936 00:22:15.967 } 00:22:15.967 ] 00:22:15.967 }' 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89529 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89529 ']' 00:22:15.968 20:35:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89529 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89529 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:15.968 killing process with pid 89529 00:22:15.968 Received shutdown signal, test time was about 60.000000 seconds 00:22:15.968 00:22:15.968 Latency(us) 00:22:15.968 [2024-11-26T20:35:09.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.968 [2024-11-26T20:35:09.523Z] =================================================================================================================== 00:22:15.968 [2024-11-26T20:35:09.523Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89529' 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89529 00:22:15.968 [2024-11-26 20:35:09.443815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.968 20:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89529 00:22:15.968 [2024-11-26 20:35:09.443956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.968 [2024-11-26 20:35:09.444009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:22:15.968 [2024-11-26 20:35:09.444022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:16.229 [2024-11-26 20:35:09.777535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:17.636 20:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:22:17.636 00:22:17.636 real 0m17.712s 00:22:17.636 user 0m23.185s 00:22:17.636 sys 0m1.510s 00:22:17.636 20:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.636 20:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.636 ************************************ 00:22:17.636 END TEST raid_rebuild_test_sb_md_interleaved 00:22:17.636 ************************************ 00:22:17.636 20:35:11 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:22:17.636 20:35:11 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:22:17.636 20:35:11 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89529 ']' 00:22:17.636 20:35:11 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89529 00:22:17.636 20:35:11 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:22:17.636 ************************************ 00:22:17.636 END TEST bdev_raid 00:22:17.636 ************************************ 00:22:17.636 00:22:17.636 real 12m33.831s 00:22:17.636 user 17m0.130s 00:22:17.636 sys 1m54.764s 00:22:17.636 20:35:11 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.636 20:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:17.636 20:35:11 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:17.636 20:35:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.636 20:35:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.636 20:35:11 -- common/autotest_common.sh@10 -- # set +x 00:22:17.636 
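The shutdown sequence traced above follows autotest_common.sh's killprocess pattern: probe the pid with `kill -0`, read the process name with `ps --no-headers -o comm=`, refuse to kill `sudo`, then `kill` and `wait`. A minimal standalone sketch of those visible steps (the real helper does additional validation; this version is illustrative only):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace above (assumes Linux/procps).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    # kill -0 sends no signal; it only checks the process exists and is signalable.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # Never kill the sudo wrapper itself, as the trace's reactor_0/sudo check guards.
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait succeeds only when the pid is our child; ignore failure otherwise.
    wait "$pid" 2>/dev/null || true
}
```

Calling it twice on the same pid reproduces both branches seen in the log: the first call prints the "killing process" line, a second call after the process is reaped prints "Process with pid ... is not found".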
************************************ 00:22:17.636 START TEST spdkcli_raid 00:22:17.636 ************************************ 00:22:17.636 20:35:11 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:17.896 * Looking for test storage... 00:22:17.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.896 20:35:11 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.896 --rc genhtml_branch_coverage=1 00:22:17.896 --rc genhtml_function_coverage=1 00:22:17.896 --rc genhtml_legend=1 00:22:17.896 --rc geninfo_all_blocks=1 00:22:17.896 --rc geninfo_unexecuted_blocks=1 00:22:17.896 00:22:17.896 ' 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:17.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.896 --rc genhtml_branch_coverage=1 00:22:17.896 --rc genhtml_function_coverage=1 00:22:17.896 --rc genhtml_legend=1 00:22:17.896 --rc geninfo_all_blocks=1 00:22:17.896 --rc geninfo_unexecuted_blocks=1 00:22:17.896 00:22:17.896 ' 00:22:17.896 
20:35:11 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.896 --rc genhtml_branch_coverage=1 00:22:17.896 --rc genhtml_function_coverage=1 00:22:17.896 --rc genhtml_legend=1 00:22:17.896 --rc geninfo_all_blocks=1 00:22:17.896 --rc geninfo_unexecuted_blocks=1 00:22:17.896 00:22:17.896 ' 00:22:17.896 20:35:11 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.896 --rc genhtml_branch_coverage=1 00:22:17.896 --rc genhtml_function_coverage=1 00:22:17.896 --rc genhtml_legend=1 00:22:17.896 --rc geninfo_all_blocks=1 00:22:17.896 --rc geninfo_unexecuted_blocks=1 00:22:17.896 00:22:17.896 ' 00:22:17.896 20:35:11 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:17.896 20:35:11 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:17.896 20:35:11 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:17.896 20:35:11 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
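The `lt 1.15 2` / `cmp_versions` trace above splits each version string into components and compares them pairwise to decide which lcov option names to export. A minimal sketch of that comparison (assumes bash and purely numeric components split on `.`; scripts/common.sh also splits on `-` and `:` and handles more operators):

```shell
#!/usr/bin/env bash
# Illustrative version-compare in the spirit of scripts/common.sh cmp_versions.
# Returns 0 when $1 < $2, 1 otherwise; missing components are treated as 0.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"
```

This mirrors why the trace exports `--rc lcov_branch_coverage=1` style options: the detected lcov 1.15 compares below 2.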
00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:17.896 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:17.897 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:17.897 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:17.897 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:17.897 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:17.897 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:17.897 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:17.897 20:35:11 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90206 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:17.897 20:35:11 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90206 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90206 ']' 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.897 20:35:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:18.155 [2024-11-26 20:35:11.502496] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
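After launching spdk_tgt, the script blocks in `waitforlisten 90206` until the target is up on `/var/tmp/spdk.sock`, bounded by `max_retries=100`. A minimal sketch of that polling loop (illustrative; the real autotest_common.sh helper verifies the RPC endpoint responds rather than just checking for the socket file):

```shell
#!/usr/bin/env bash
# Sketch of waitforlisten: poll until the target's UNIX-domain RPC socket
# appears, giving up if the process dies or retries run out.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        # If the target exited during startup there is nothing to wait for.
        kill -0 "$pid" 2>/dev/null || return 1
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```

The `kill -0` check is what lets the trace fail fast instead of sleeping through all retries when spdk_tgt crashes on startup.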
00:22:18.155 [2024-11-26 20:35:11.502695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90206 ] 00:22:18.155 [2024-11-26 20:35:11.680707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:18.413 [2024-11-26 20:35:11.806474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.413 [2024-11-26 20:35:11.806509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.349 20:35:12 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.349 20:35:12 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:22:19.349 20:35:12 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:22:19.349 20:35:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.349 20:35:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.349 20:35:12 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:22:19.349 20:35:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.349 20:35:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.349 20:35:12 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:19.349 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:19.349 ' 00:22:21.271 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:22:21.271 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:22:21.271 20:35:14 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:22:21.271 20:35:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.271 20:35:14 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.271 20:35:14 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:22:21.271 20:35:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.272 20:35:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:21.272 20:35:14 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:22:21.272 ' 00:22:22.211 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:22:22.211 20:35:15 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:22:22.211 20:35:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.211 20:35:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.211 20:35:15 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:22:22.211 20:35:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.211 20:35:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.211 20:35:15 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:22:22.211 20:35:15 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:22:22.824 20:35:16 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:22:22.824 20:35:16 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:22:22.824 20:35:16 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:22:22.824 20:35:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.824 20:35:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.824 20:35:16 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:22:22.824 20:35:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.824 20:35:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.824 20:35:16 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:22:22.824 ' 00:22:23.765 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:22:24.025 20:35:17 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:22:24.025 20:35:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.025 20:35:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.025 20:35:17 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:22:24.025 20:35:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.025 20:35:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.025 20:35:17 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:22:24.025 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:22:24.025 ' 00:22:25.404 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:22:25.404 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:22:25.663 20:35:19 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:25.663 20:35:19 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90206 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90206 ']' 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90206 00:22:25.663 20:35:19 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90206 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90206' 00:22:25.663 killing process with pid 90206 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90206 00:22:25.663 20:35:19 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90206 00:22:28.992 20:35:21 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:22:28.992 20:35:21 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90206 ']' 00:22:28.992 20:35:21 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90206 00:22:28.992 20:35:21 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90206 ']' 00:22:28.992 20:35:21 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90206 00:22:28.992 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90206) - No such process 00:22:28.992 20:35:21 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90206 is not found' 00:22:28.992 Process with pid 90206 is not found 00:22:28.992 20:35:21 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:22:28.992 20:35:21 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:28.992 20:35:21 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:28.992 20:35:21 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:28.992 00:22:28.992 real 0m10.720s 00:22:28.992 user 0m22.118s 00:22:28.992 sys 
0m1.128s 00:22:28.992 20:35:21 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.992 20:35:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.992 ************************************ 00:22:28.992 END TEST spdkcli_raid 00:22:28.992 ************************************ 00:22:28.992 20:35:21 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:28.992 20:35:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:28.992 20:35:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.992 20:35:21 -- common/autotest_common.sh@10 -- # set +x 00:22:28.992 ************************************ 00:22:28.992 START TEST blockdev_raid5f 00:22:28.992 ************************************ 00:22:28.992 20:35:21 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:28.992 * Looking for test storage... 00:22:28.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:28.992 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:28.992 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:22:28.992 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:28.992 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.992 20:35:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:22:28.993 20:35:22 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.993 20:35:22 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.993 20:35:22 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.993 20:35:22 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:28.993 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.993 --rc genhtml_branch_coverage=1 00:22:28.993 --rc genhtml_function_coverage=1 00:22:28.993 --rc genhtml_legend=1 00:22:28.993 --rc geninfo_all_blocks=1 00:22:28.993 --rc geninfo_unexecuted_blocks=1 00:22:28.993 00:22:28.993 ' 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:28.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.993 --rc genhtml_branch_coverage=1 00:22:28.993 --rc genhtml_function_coverage=1 00:22:28.993 --rc genhtml_legend=1 00:22:28.993 --rc geninfo_all_blocks=1 00:22:28.993 --rc geninfo_unexecuted_blocks=1 00:22:28.993 00:22:28.993 ' 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:28.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.993 --rc genhtml_branch_coverage=1 00:22:28.993 --rc genhtml_function_coverage=1 00:22:28.993 --rc genhtml_legend=1 00:22:28.993 --rc geninfo_all_blocks=1 00:22:28.993 --rc geninfo_unexecuted_blocks=1 00:22:28.993 00:22:28.993 ' 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:28.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.993 --rc genhtml_branch_coverage=1 00:22:28.993 --rc genhtml_function_coverage=1 00:22:28.993 --rc genhtml_legend=1 00:22:28.993 --rc geninfo_all_blocks=1 00:22:28.993 --rc geninfo_unexecuted_blocks=1 00:22:28.993 00:22:28.993 ' 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90494 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:28.993 20:35:22 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90494 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90494 ']' 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.993 20:35:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:28.993 [2024-11-26 20:35:22.285870] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:22:28.993 [2024-11-26 20:35:22.286091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90494 ] 00:22:28.993 [2024-11-26 20:35:22.466096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.253 [2024-11-26 20:35:22.586789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.191 20:35:23 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.191 20:35:23 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:22:30.192 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:30.192 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:22:30.192 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:22:30.192 20:35:23 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.192 20:35:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.192 Malloc0 00:22:30.192 Malloc1 00:22:30.192 Malloc2 00:22:30.192 20:35:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.192 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:30.192 20:35:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.192 20:35:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
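The `rpc_cmd bdev_get_bdevs` call above feeds a jq filter (`select(.claimed == false)`, then `.name`) to collect unclaimed bdev names into an array. A standalone sketch of that extraction, combined into one jq invocation and run against a trimmed, hand-written copy of the kind of JSON this log shows (assumes jq is installed):

```shell
#!/usr/bin/env bash
# Illustrative version of the blockdev.sh bdevs_name extraction: keep only
# bdevs not claimed by another module, then pull out their names.
bdevs_json='[
  {"name": "raid5f",  "claimed": false, "product_name": "Raid Volume"},
  {"name": "Malloc0", "claimed": true,  "product_name": "Malloc disk"}
]'

mapfile -t bdevs_name < <(
    jq -r '.[] | select(.claimed == false) | .name' <<< "$bdevs_json"
)

echo "${bdevs_name[@]}"  # raid5f
```

In the real run, Malloc0-2 are claimed as base bdevs of the RAID volume, so only `raid5f` survives the filter and becomes `hello_world_bdev`.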
00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "778cab64-39cb-43e3-9175-fa43bd0df145"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "778cab64-39cb-43e3-9175-fa43bd0df145",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "778cab64-39cb-43e3-9175-fa43bd0df145",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "464c37eb-0c23-43a8-bcc9-01af25c77165",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4d0935ca-447f-4d03-98c8-08c3db72da5c",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1b3f3b08-2e5f-439f-978a-c8cd369ec80c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:30.452 20:35:23 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90494 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90494 ']' 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90494 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90494 00:22:30.452 killing process with pid 90494 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90494' 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90494 00:22:30.452 20:35:23 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90494 00:22:33.759 20:35:27 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:33.759 20:35:27 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:33.759 20:35:27 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:33.759 20:35:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.759 20:35:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:33.759 ************************************ 00:22:33.759 START TEST bdev_hello_world 00:22:33.759 ************************************ 00:22:33.759 20:35:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:33.759 [2024-11-26 20:35:27.219396] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:22:33.759 [2024-11-26 20:35:27.219619] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90567 ] 00:22:34.018 [2024-11-26 20:35:27.400971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.018 [2024-11-26 20:35:27.537994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.954 [2024-11-26 20:35:28.140543] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:34.954 [2024-11-26 20:35:28.140717] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:34.954 [2024-11-26 20:35:28.140768] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:34.954 [2024-11-26 20:35:28.141421] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:34.954 [2024-11-26 20:35:28.141691] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:34.954 [2024-11-26 20:35:28.141761] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:34.954 [2024-11-26 20:35:28.141862] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:22:34.954 00:22:34.954 [2024-11-26 20:35:28.141937] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:36.329 00:22:36.329 ************************************ 00:22:36.329 END TEST bdev_hello_world 00:22:36.329 ************************************ 00:22:36.329 real 0m2.725s 00:22:36.329 user 0m2.327s 00:22:36.329 sys 0m0.270s 00:22:36.329 20:35:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.329 20:35:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:36.588 20:35:29 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:36.588 20:35:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.588 20:35:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.588 20:35:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:36.588 ************************************ 00:22:36.588 START TEST bdev_bounds 00:22:36.588 ************************************ 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90609 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90609' 00:22:36.588 Process bdevio pid: 90609 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90609 00:22:36.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90609 ']' 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.588 20:35:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:36.588 [2024-11-26 20:35:30.021164] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:22:36.588 [2024-11-26 20:35:30.021422] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90609 ] 00:22:36.847 [2024-11-26 20:35:30.191166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.847 [2024-11-26 20:35:30.332218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.847 [2024-11-26 20:35:30.332331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.847 [2024-11-26 20:35:30.332362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.786 20:35:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.786 20:35:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:37.786 20:35:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:37.786 I/O targets: 00:22:37.786 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:22:37.786 00:22:37.786 00:22:37.786 CUnit - A unit testing framework for C - Version 2.1-3 00:22:37.786 http://cunit.sourceforge.net/ 00:22:37.786 00:22:37.786 00:22:37.786 Suite: bdevio tests on: raid5f 00:22:37.786 Test: blockdev write read block ...passed 00:22:37.786 Test: blockdev write zeroes read block ...passed 00:22:37.786 Test: blockdev write zeroes read no split ...passed 00:22:37.786 Test: blockdev write zeroes read split ...passed 00:22:38.046 Test: blockdev write zeroes read split partial ...passed 00:22:38.046 Test: blockdev reset ...passed 00:22:38.046 Test: blockdev write read 8 blocks ...passed 00:22:38.046 Test: blockdev write read size > 128k ...passed 00:22:38.046 Test: blockdev write read invalid size ...passed 00:22:38.046 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:38.046 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:38.046 Test: blockdev write read max offset ...passed 00:22:38.046 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:38.046 Test: blockdev writev readv 8 blocks ...passed 00:22:38.046 Test: blockdev writev readv 30 x 1block ...passed 00:22:38.046 Test: blockdev writev readv block ...passed 00:22:38.046 Test: blockdev writev readv size > 128k ...passed 00:22:38.046 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:38.046 Test: blockdev comparev and writev ...passed 00:22:38.046 Test: blockdev nvme passthru rw ...passed 00:22:38.046 Test: blockdev nvme passthru vendor specific ...passed 00:22:38.046 Test: blockdev nvme admin passthru ...passed 00:22:38.046 Test: blockdev copy ...passed 00:22:38.046 00:22:38.046 Run Summary: Type Total Ran Passed Failed Inactive 00:22:38.046 suites 1 1 n/a 0 0 00:22:38.046 tests 23 23 23 0 0 00:22:38.046 asserts 130 130 130 0 n/a 00:22:38.046 00:22:38.046 Elapsed time = 0.721 seconds 00:22:38.046 0 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 90609 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90609 ']' 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90609 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90609 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90609' 00:22:38.046 killing process with pid 90609 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90609 00:22:38.046 20:35:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90609 00:22:39.955 20:35:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:39.955 00:22:39.955 real 0m3.216s 00:22:39.955 user 0m8.159s 00:22:39.955 sys 0m0.389s 00:22:39.955 20:35:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.955 20:35:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:39.955 ************************************ 00:22:39.955 END TEST bdev_bounds 00:22:39.955 ************************************ 00:22:39.955 20:35:33 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:39.955 20:35:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:39.955 20:35:33 blockdev_raid5f -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:22:39.955 20:35:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:39.955 ************************************ 00:22:39.955 START TEST bdev_nbd 00:22:39.955 ************************************ 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:39.955 20:35:33 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90674 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90674 /var/tmp/spdk-nbd.sock 00:22:39.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90674 ']' 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.955 20:35:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:39.955 [2024-11-26 20:35:33.315984] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:22:39.955 [2024-11-26 20:35:33.316111] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.955 [2024-11-26 20:35:33.499354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.215 [2024-11-26 20:35:33.639016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:40.785 20:35:34 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:41.045 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:41.045 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:41.305 1+0 records in 00:22:41.305 1+0 records out 00:22:41.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592167 s, 6.9 MB/s 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:41.305 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:41.564 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:41.564 { 00:22:41.564 "nbd_device": "/dev/nbd0", 00:22:41.564 "bdev_name": "raid5f" 00:22:41.564 } 00:22:41.564 ]' 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:41.565 { 00:22:41.565 "nbd_device": "/dev/nbd0", 00:22:41.565 "bdev_name": "raid5f" 00:22:41.565 } 00:22:41.565 ]' 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:41.565 20:35:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:41.824 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:41.825 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:41.825 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.084 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:42.345 /dev/nbd0 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:42.345 20:35:35 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:42.345 1+0 records in 00:22:42.345 1+0 records out 00:22:42.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699786 s, 5.9 MB/s 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:42.345 20:35:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:42.605 { 00:22:42.605 "nbd_device": "/dev/nbd0", 00:22:42.605 "bdev_name": "raid5f" 00:22:42.605 } 00:22:42.605 ]' 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:42.605 { 00:22:42.605 "nbd_device": "/dev/nbd0", 00:22:42.605 "bdev_name": "raid5f" 00:22:42.605 } 00:22:42.605 ]' 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:42.605 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:42.866 256+0 records in 00:22:42.866 256+0 records out 00:22:42.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131858 s, 79.5 MB/s 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:42.866 256+0 records in 00:22:42.866 256+0 records out 00:22:42.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363925 s, 28.8 MB/s 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:42.866 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.867 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:43.127 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:43.387 20:35:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:43.647 malloc_lvol_verify 00:22:43.647 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:43.908 3d5f57ea-e500-4cbf-a12b-dca43be30fa3 00:22:43.908 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:44.166 2ba6bd10-19a3-4b4b-ac4a-97ab72ce5fb3 00:22:44.166 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:44.425 /dev/nbd0 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:44.425 mke2fs 1.47.0 (5-Feb-2023) 00:22:44.425 Discarding device blocks: 0/4096 done 00:22:44.425 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:44.425 00:22:44.425 Allocating group tables: 0/1 done 00:22:44.425 Writing inode tables: 0/1 done 00:22:44.425 Creating journal (1024 blocks): done 00:22:44.425 Writing superblocks and filesystem accounting information: 0/1 done 00:22:44.425 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:44.425 20:35:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:44.684 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90674 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90674 ']' 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90674 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90674 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90674' 00:22:44.685 killing process with pid 90674 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90674 00:22:44.685 20:35:38 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90674 00:22:46.596 20:35:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:46.596 00:22:46.596 real 0m6.814s 00:22:46.596 user 0m9.323s 00:22:46.596 sys 0m1.485s 00:22:46.596 20:35:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.596 20:35:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:46.596 ************************************ 00:22:46.596 END TEST bdev_nbd 00:22:46.596 ************************************ 00:22:46.596 20:35:40 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:46.596 20:35:40 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:22:46.596 20:35:40 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:22:46.596 20:35:40 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:22:46.596 20:35:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.596 20:35:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.596 20:35:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:46.596 ************************************ 00:22:46.596 START TEST bdev_fio 00:22:46.596 ************************************ 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:46.596 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:22:46.596 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:46.855 ************************************ 00:22:46.855 START TEST bdev_fio_rw_verify 00:22:46.855 ************************************ 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:46.855 20:35:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:47.112 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:47.112 fio-3.35 00:22:47.112 Starting 1 thread 00:22:59.393 00:22:59.393 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90888: Tue Nov 26 20:35:51 2024 00:22:59.393 read: IOPS=9574, BW=37.4MiB/s (39.2MB/s)(374MiB/10001msec) 00:22:59.393 slat (usec): min=18, max=1711, avg=25.15, stdev= 6.63 00:22:59.393 clat (usec): min=12, max=2020, avg=166.37, stdev=63.04 00:22:59.393 lat (usec): min=36, max=2045, avg=191.53, stdev=64.35 00:22:59.393 clat percentiles (usec): 00:22:59.393 | 50.000th=[ 167], 99.000th=[ 302], 99.900th=[ 326], 99.990th=[ 379], 00:22:59.393 | 99.999th=[ 2024] 00:22:59.393 write: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(386MiB/9884msec); 0 zone resets 00:22:59.393 slat (usec): min=8, max=253, avg=21.35, stdev= 5.00 00:22:59.393 clat (usec): min=80, max=1675, avg=382.01, stdev=63.22 00:22:59.393 lat (usec): min=100, max=1929, avg=403.36, stdev=65.41 00:22:59.393 clat percentiles (usec): 00:22:59.393 | 50.000th=[ 383], 99.000th=[ 519], 99.900th=[ 635], 99.990th=[ 1037], 00:22:59.393 | 99.999th=[ 1680] 00:22:59.393 bw ( KiB/s): min=33720, max=45896, per=99.39%, avg=39788.63, stdev=3633.37, samples=19 00:22:59.393 iops : min= 8430, max=11474, avg=9947.16, stdev=908.34, samples=19 00:22:59.393 lat (usec) : 20=0.01%, 50=0.01%, 
100=9.30%, 250=35.41%, 500=53.97% 00:22:59.393 lat (usec) : 750=1.30%, 1000=0.02% 00:22:59.393 lat (msec) : 2=0.01%, 4=0.01% 00:22:59.393 cpu : usr=99.01%, sys=0.34%, ctx=19, majf=0, minf=8129 00:22:59.393 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:59.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.393 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.393 issued rwts: total=95751,98918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.393 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:59.393 00:22:59.393 Run status group 0 (all jobs): 00:22:59.393 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=374MiB (392MB), run=10001-10001msec 00:22:59.393 WRITE: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=386MiB (405MB), run=9884-9884msec 00:22:59.961 ----------------------------------------------------- 00:22:59.961 Suppressions used: 00:22:59.961 count bytes template 00:22:59.961 1 7 /usr/src/fio/parse.c 00:22:59.961 202 19392 /usr/src/fio/iolog.c 00:22:59.961 1 8 libtcmalloc_minimal.so 00:22:59.961 1 904 libcrypto.so 00:22:59.961 ----------------------------------------------------- 00:22:59.961 00:22:59.961 00:22:59.961 real 0m13.102s 00:22:59.961 user 0m13.073s 00:22:59.961 sys 0m0.672s 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:59.961 ************************************ 00:22:59.961 END TEST bdev_fio_rw_verify 00:22:59.961 ************************************ 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio 
-- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "778cab64-39cb-43e3-9175-fa43bd0df145"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "778cab64-39cb-43e3-9175-fa43bd0df145",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "778cab64-39cb-43e3-9175-fa43bd0df145",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "464c37eb-0c23-43a8-bcc9-01af25c77165",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4d0935ca-447f-4d03-98c8-08c3db72da5c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1b3f3b08-2e5f-439f-978a-c8cd369ec80c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:59.961 /home/vagrant/spdk_repo/spdk 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:22:59.961 00:22:59.961 real 0m13.376s 00:22:59.961 user 0m13.200s 00:22:59.961 sys 0m0.795s 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.961 20:35:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:59.961 ************************************ 00:22:59.961 END TEST bdev_fio 00:22:59.961 ************************************ 00:23:00.221 20:35:53 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:00.221 20:35:53 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:00.221 20:35:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:00.221 20:35:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.222 20:35:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:00.222 ************************************ 00:23:00.222 START TEST bdev_verify 00:23:00.222 ************************************ 00:23:00.222 20:35:53 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:00.222 [2024-11-26 20:35:53.619386] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 
00:23:00.222 [2024-11-26 20:35:53.619495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91052 ] 00:23:00.480 [2024-11-26 20:35:53.801115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:00.480 [2024-11-26 20:35:53.925938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.481 [2024-11-26 20:35:53.925968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.048 Running I/O for 5 seconds... 00:23:03.358 11740.00 IOPS, 45.86 MiB/s [2024-11-26T20:35:57.849Z] 12650.00 IOPS, 49.41 MiB/s [2024-11-26T20:35:58.785Z] 12654.67 IOPS, 49.43 MiB/s [2024-11-26T20:35:59.807Z] 12146.50 IOPS, 47.45 MiB/s [2024-11-26T20:35:59.807Z] 11909.40 IOPS, 46.52 MiB/s 00:23:06.252 Latency(us) 00:23:06.252 [2024-11-26T20:35:59.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.252 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:06.252 Verification LBA range: start 0x0 length 0x2000 00:23:06.252 raid5f : 5.01 5956.20 23.27 0.00 0.00 32202.88 284.39 29534.13 00:23:06.252 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:06.252 Verification LBA range: start 0x2000 length 0x2000 00:23:06.252 raid5f : 5.02 5967.41 23.31 0.00 0.00 32199.13 268.30 29534.13 00:23:06.252 [2024-11-26T20:35:59.807Z] =================================================================================================================== 00:23:06.252 [2024-11-26T20:35:59.807Z] Total : 11923.61 46.58 0.00 0.00 32201.00 268.30 29534.13 00:23:07.631 00:23:07.632 real 0m7.648s 00:23:07.632 user 0m14.116s 00:23:07.632 sys 0m0.276s 00:23:07.632 20:36:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.632 20:36:01 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:07.632 ************************************ 00:23:07.632 END TEST bdev_verify 00:23:07.632 ************************************ 00:23:07.891 20:36:01 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:07.891 20:36:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:07.891 20:36:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.891 20:36:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:07.891 ************************************ 00:23:07.891 START TEST bdev_verify_big_io 00:23:07.891 ************************************ 00:23:07.891 20:36:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:07.891 [2024-11-26 20:36:01.335316] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization... 00:23:07.891 [2024-11-26 20:36:01.335457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91156 ] 00:23:08.150 [2024-11-26 20:36:01.515859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:08.150 [2024-11-26 20:36:01.642031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.150 [2024-11-26 20:36:01.642066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.716 Running I/O for 5 seconds... 
00:23:11.029 506.00 IOPS, 31.62 MiB/s [2024-11-26T20:36:05.519Z] 600.50 IOPS, 37.53 MiB/s [2024-11-26T20:36:06.463Z] 612.67 IOPS, 38.29 MiB/s [2024-11-26T20:36:07.401Z] 634.50 IOPS, 39.66 MiB/s [2024-11-26T20:36:07.659Z] 659.60 IOPS, 41.23 MiB/s 00:23:14.104 Latency(us) 00:23:14.104 [2024-11-26T20:36:07.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.104 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:14.104 Verification LBA range: start 0x0 length 0x200 00:23:14.105 raid5f : 5.34 332.41 20.78 0.00 0.00 9440610.14 179.76 424925.12 00:23:14.105 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:14.105 Verification LBA range: start 0x200 length 0x200 00:23:14.105 raid5f : 5.33 333.18 20.82 0.00 0.00 9389875.99 293.34 424925.12 00:23:14.105 [2024-11-26T20:36:07.660Z] =================================================================================================================== 00:23:14.105 [2024-11-26T20:36:07.660Z] Total : 665.58 41.60 0.00 0.00 9415243.06 179.76 424925.12 00:23:16.008 00:23:16.008 real 0m7.981s 00:23:16.008 user 0m14.778s 00:23:16.008 sys 0m0.277s 00:23:16.008 20:36:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.008 20:36:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:16.008 ************************************ 00:23:16.008 END TEST bdev_verify_big_io 00:23:16.008 ************************************ 00:23:16.008 20:36:09 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:16.008 20:36:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:16.008 20:36:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.008 20:36:09 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:23:16.008 ************************************
00:23:16.008 START TEST bdev_write_zeroes
00:23:16.008 ************************************
00:23:16.008 20:36:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:16.008 [2024-11-26 20:36:09.380701] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization...
00:23:16.008 [2024-11-26 20:36:09.380835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91259 ]
00:23:16.008 [2024-11-26 20:36:09.557148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:16.267 [2024-11-26 20:36:09.682116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:16.836 Running I/O for 1 seconds...
00:23:17.804 23127.00 IOPS, 90.34 MiB/s
00:23:17.804 Latency(us)
00:23:17.804 [2024-11-26T20:36:11.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:17.804 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:17.804 raid5f : 1.01 23111.99 90.28 0.00 0.00 5518.04 1874.50 8642.74
00:23:17.804 [2024-11-26T20:36:11.359Z] ===================================================================================================================
00:23:17.804 [2024-11-26T20:36:11.360Z] Total : 23111.99 90.28 0.00 0.00 5518.04 1874.50 8642.74
00:23:19.712
00:23:19.712 real 0m3.693s
00:23:19.713 user 0m3.270s
00:23:19.713 sys 0m0.290s
00:23:19.713 20:36:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:19.713 20:36:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:23:19.713 ************************************
00:23:19.713 END TEST bdev_write_zeroes
00:23:19.713 ************************************
00:23:19.713 20:36:13 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:19.713 20:36:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:23:19.713 20:36:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:19.713 20:36:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:23:19.713 ************************************
00:23:19.713 START TEST bdev_json_nonenclosed
00:23:19.713 ************************************
00:23:19.713 20:36:13 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:19.713 [2024-11-26 20:36:13.130102] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization...
00:23:19.713 [2024-11-26 20:36:13.130234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91314 ]
00:23:19.972 [2024-11-26 20:36:13.310415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:19.972 [2024-11-26 20:36:13.457080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:19.972 [2024-11-26 20:36:13.457223] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:23:19.972 [2024-11-26 20:36:13.457282] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:23:19.972 [2024-11-26 20:36:13.457297] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:20.231
00:23:20.231 real 0m0.718s
00:23:20.231 user 0m0.467s
00:23:20.231 sys 0m0.145s
00:23:20.231 20:36:13 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:20.231 20:36:13 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:23:20.231 ************************************
00:23:20.231 END TEST bdev_json_nonenclosed
00:23:20.231 ************************************
00:23:20.491 20:36:13 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:20.491 20:36:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:23:20.491 20:36:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:20.491 20:36:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:23:20.491 ************************************
00:23:20.491 START TEST bdev_json_nonarray
00:23:20.491 ************************************
00:23:20.491 20:36:13 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:20.491 [2024-11-26 20:36:13.930932] Starting SPDK v25.01-pre git sha1 0836dccda / DPDK 24.03.0 initialization...
00:23:20.491 [2024-11-26 20:36:13.931080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91340 ]
00:23:20.750 [2024-11-26 20:36:14.113500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:20.750 [2024-11-26 20:36:14.266447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:20.750 [2024-11-26 20:36:14.266599] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:23:20.750 [2024-11-26 20:36:14.266629] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:23:20.750 [2024-11-26 20:36:14.266652] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:21.319
00:23:21.319 real 0m0.735s
00:23:21.319 user 0m0.463s
00:23:21.319 sys 0m0.165s
00:23:21.319 20:36:14 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:21.319 20:36:14 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:23:21.319 ************************************
00:23:21.319 END TEST bdev_json_nonarray
00:23:21.319 ************************************
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:23:21.319 20:36:14 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:23:21.319
00:23:21.319 real 0m52.689s
00:23:21.319 user 1m11.400s
00:23:21.319 sys 0m5.190s
00:23:21.319 20:36:14 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:21.319 20:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:23:21.319 ************************************
00:23:21.319 END TEST blockdev_raid5f
00:23:21.319 ************************************
00:23:21.319 20:36:14 -- spdk/autotest.sh@194 -- # uname -s
00:23:21.319 20:36:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:23:21.319 20:36:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:23:21.319 20:36:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:23:21.319 20:36:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@260 -- # timing_exit lib
00:23:21.319 20:36:14 -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:21.319 20:36:14 -- common/autotest_common.sh@10 -- # set +x
00:23:21.319 20:36:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:23:21.319 20:36:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:23:21.319 20:36:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:23:21.319 20:36:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:23:21.319 20:36:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:23:21.319 20:36:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:23:21.319 20:36:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:23:21.319 20:36:14 -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:21.319 20:36:14 -- common/autotest_common.sh@10 -- # set +x
00:23:21.319 20:36:14 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:23:21.319 20:36:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:23:21.319 20:36:14 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:23:21.319 20:36:14 -- common/autotest_common.sh@10 -- # set +x
00:23:23.227 INFO: APP EXITING
00:23:23.227 INFO: killing all VMs
00:23:23.227 INFO: killing vhost app
00:23:23.227 INFO: EXIT DONE
00:23:23.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:23:23.796 Waiting for block devices as requested
00:23:23.796 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:23:24.055 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:23:24.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:23:24.992 Cleaning
00:23:24.992 Removing: /var/run/dpdk/spdk0/config
00:23:24.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:23:24.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:23:24.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:23:24.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:23:24.992 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:23:24.992 Removing: /var/run/dpdk/spdk0/hugepage_info
00:23:24.992 Removing: /dev/shm/spdk_tgt_trace.pid57001
00:23:24.992 Removing: /var/run/dpdk/spdk0
00:23:24.992 Removing: /var/run/dpdk/spdk_pid56744
00:23:24.992 Removing: /var/run/dpdk/spdk_pid57001
00:23:24.992 Removing: /var/run/dpdk/spdk_pid57230
00:23:24.992 Removing: /var/run/dpdk/spdk_pid57334
00:23:24.992 Removing: /var/run/dpdk/spdk_pid57390
00:23:24.992 Removing: /var/run/dpdk/spdk_pid57529
00:23:24.992 Removing: /var/run/dpdk/spdk_pid57553
00:23:24.993 Removing: /var/run/dpdk/spdk_pid57763
00:23:24.993 Removing: /var/run/dpdk/spdk_pid57882
00:23:24.993 Removing: /var/run/dpdk/spdk_pid57999
00:23:24.993 Removing: /var/run/dpdk/spdk_pid58126
00:23:24.993 Removing: /var/run/dpdk/spdk_pid58240
00:23:24.993 Removing: /var/run/dpdk/spdk_pid58279
00:23:24.993 Removing: /var/run/dpdk/spdk_pid58316
00:23:24.993 Removing: /var/run/dpdk/spdk_pid58392
00:23:24.993 Removing: /var/run/dpdk/spdk_pid58520
00:23:24.993 Removing: /var/run/dpdk/spdk_pid58980
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59061
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59146
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59162
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59332
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59352
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59507
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59527
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59598
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59626
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59691
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59715
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59921
00:23:24.993 Removing: /var/run/dpdk/spdk_pid59963
00:23:24.993 Removing: /var/run/dpdk/spdk_pid60046
00:23:24.993 Removing: /var/run/dpdk/spdk_pid61427
00:23:24.993 Removing: /var/run/dpdk/spdk_pid61644
00:23:24.993 Removing: /var/run/dpdk/spdk_pid61790
00:23:24.993 Removing: /var/run/dpdk/spdk_pid62450
00:23:24.993 Removing: /var/run/dpdk/spdk_pid62667
00:23:24.993 Removing: /var/run/dpdk/spdk_pid62807
00:23:24.993 Removing: /var/run/dpdk/spdk_pid63472
00:23:24.993 Removing: /var/run/dpdk/spdk_pid63808
00:23:24.993 Removing: /var/run/dpdk/spdk_pid63954
00:23:24.993 Removing: /var/run/dpdk/spdk_pid65361
00:23:24.993 Removing: /var/run/dpdk/spdk_pid65625
00:23:24.993 Removing: /var/run/dpdk/spdk_pid65767
00:23:24.993 Removing: /var/run/dpdk/spdk_pid67156
00:23:24.993 Removing: /var/run/dpdk/spdk_pid67409
00:23:25.252 Removing: /var/run/dpdk/spdk_pid67560
00:23:25.252 Removing: /var/run/dpdk/spdk_pid68951
00:23:25.252 Removing: /var/run/dpdk/spdk_pid69397
00:23:25.252 Removing: /var/run/dpdk/spdk_pid69537
00:23:25.252 Removing: /var/run/dpdk/spdk_pid71044
00:23:25.252 Removing: /var/run/dpdk/spdk_pid71315
00:23:25.252 Removing: /var/run/dpdk/spdk_pid71461
00:23:25.252 Removing: /var/run/dpdk/spdk_pid72963
00:23:25.252 Removing: /var/run/dpdk/spdk_pid73234
00:23:25.252 Removing: /var/run/dpdk/spdk_pid73375
00:23:25.252 Removing: /var/run/dpdk/spdk_pid74876
00:23:25.252 Removing: /var/run/dpdk/spdk_pid75369
00:23:25.252 Removing: /var/run/dpdk/spdk_pid75515
00:23:25.252 Removing: /var/run/dpdk/spdk_pid75664
00:23:25.252 Removing: /var/run/dpdk/spdk_pid76093
00:23:25.252 Removing: /var/run/dpdk/spdk_pid76842
00:23:25.252 Removing: /var/run/dpdk/spdk_pid77239
00:23:25.252 Removing: /var/run/dpdk/spdk_pid77938
00:23:25.252 Removing: /var/run/dpdk/spdk_pid78407
00:23:25.252 Removing: /var/run/dpdk/spdk_pid79178
00:23:25.252 Removing: /var/run/dpdk/spdk_pid79597
00:23:25.252 Removing: /var/run/dpdk/spdk_pid81572
00:23:25.252 Removing: /var/run/dpdk/spdk_pid82024
00:23:25.252 Removing: /var/run/dpdk/spdk_pid82472
00:23:25.252 Removing: /var/run/dpdk/spdk_pid84587
00:23:25.252 Removing: /var/run/dpdk/spdk_pid85074
00:23:25.252 Removing: /var/run/dpdk/spdk_pid85597
00:23:25.252 Removing: /var/run/dpdk/spdk_pid86661
00:23:25.252 Removing: /var/run/dpdk/spdk_pid86985
00:23:25.252 Removing: /var/run/dpdk/spdk_pid87929
00:23:25.252 Removing: /var/run/dpdk/spdk_pid88252
00:23:25.252 Removing: /var/run/dpdk/spdk_pid89200
00:23:25.252 Removing: /var/run/dpdk/spdk_pid89529
00:23:25.252 Removing: /var/run/dpdk/spdk_pid90206
00:23:25.252 Removing: /var/run/dpdk/spdk_pid90494
00:23:25.252 Removing: /var/run/dpdk/spdk_pid90567
00:23:25.252 Removing: /var/run/dpdk/spdk_pid90609
00:23:25.252 Removing: /var/run/dpdk/spdk_pid90873
00:23:25.252 Removing: /var/run/dpdk/spdk_pid91052
00:23:25.252 Removing: /var/run/dpdk/spdk_pid91156
00:23:25.252 Removing: /var/run/dpdk/spdk_pid91259
00:23:25.252 Removing: /var/run/dpdk/spdk_pid91314
00:23:25.252 Removing: /var/run/dpdk/spdk_pid91340
00:23:25.252 Clean
00:23:25.252 20:36:18 -- common/autotest_common.sh@1453 -- # return 0
00:23:25.252 20:36:18 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:23:25.252 20:36:18 -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:25.252 20:36:18 -- common/autotest_common.sh@10 -- # set +x
00:23:25.512 20:36:18 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:23:25.512 20:36:18 -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:25.512 20:36:18 -- common/autotest_common.sh@10 -- # set +x
00:23:25.512 20:36:18 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:25.513 20:36:18 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:23:25.513 20:36:18 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:23:25.513 20:36:18 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:23:25.513 20:36:18 -- spdk/autotest.sh@398 -- # hostname
00:23:25.513 20:36:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:23:25.788 geninfo: WARNING: invalid characters removed from testname!
00:23:52.338 20:36:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:53.325 20:36:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:55.863 20:36:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:58.415 20:36:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:00.984 20:36:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:03.523 20:36:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:06.081 20:36:59 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:24:06.081 20:36:59 -- spdk/autorun.sh@1 -- $ timing_finish
00:24:06.081 20:36:59 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:24:06.081 20:36:59 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:24:06.081 20:36:59 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:24:06.081 20:36:59 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5433 ]]
+ sudo kill 5433
00:24:06.091 [Pipeline] }
00:24:06.107 [Pipeline] // timeout
00:24:06.112 [Pipeline] }
00:24:06.130 [Pipeline] // stage
00:24:06.137 [Pipeline] }
00:24:06.153 [Pipeline] // catchError
00:24:06.163 [Pipeline] stage
00:24:06.165 [Pipeline] { (Stop VM)
00:24:06.178 [Pipeline] sh
00:24:06.458 + vagrant halt
00:24:09.747 ==> default: Halting domain...
00:24:16.353 [Pipeline] sh
00:24:16.639 + vagrant destroy -f
00:24:19.210 ==> default: Removing domain...
00:24:19.537 [Pipeline] sh
00:24:19.822 + mv output /var/jenkins/workspace/raid-vg-autotest_3/output
00:24:19.832 [Pipeline] }
00:24:19.848 [Pipeline] // stage
00:24:19.855 [Pipeline] }
00:24:19.871 [Pipeline] // dir
00:24:19.879 [Pipeline] }
00:24:19.895 [Pipeline] // wrap
00:24:19.902 [Pipeline] }
00:24:19.917 [Pipeline] // catchError
00:24:19.928 [Pipeline] stage
00:24:19.931 [Pipeline] { (Epilogue)
00:24:19.943 [Pipeline] sh
00:24:20.234 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:24:25.522 [Pipeline] catchError
00:24:25.524 [Pipeline] {
00:24:25.536 [Pipeline] sh
00:24:25.820 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:24:25.820 Artifacts sizes are good
00:24:25.830 [Pipeline] }
00:24:25.843 [Pipeline] // catchError
00:24:25.853 [Pipeline] archiveArtifacts
00:24:25.859 Archiving artifacts
00:24:25.992 [Pipeline] cleanWs
00:24:26.006 [WS-CLEANUP] Deleting project workspace...
00:24:26.006 [WS-CLEANUP] Deferred wipeout is used...
00:24:26.013 [WS-CLEANUP] done
00:24:26.015 [Pipeline] }
00:24:26.032 [Pipeline] // stage
00:24:26.039 [Pipeline] }
00:24:26.054 [Pipeline] // node
00:24:26.061 [Pipeline] End of Pipeline
00:24:26.121 Finished: SUCCESS